00:00:00.001 Started by upstream project "autotest-per-patch" build number 132302 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.061 The recommended git tool is: git 00:00:00.062 using credential 00000000-0000-0000-0000-000000000002 00:00:00.063 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.091 Fetching changes from the remote Git repository 00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.149 Using shallow fetch with depth 1 00:00:00.149 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.149 > git --version # timeout=10 00:00:00.214 > git --version # 'git version 2.39.2' 00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.318 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.330 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.343 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.343 > git config core.sparsecheckout # timeout=10 00:00:05.353 > git read-tree -mu HEAD # timeout=10 00:00:05.370 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.390 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.390 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.499 [Pipeline] Start of Pipeline 00:00:05.510 [Pipeline] library 00:00:05.511 Loading library shm_lib@master 00:00:05.511 Library shm_lib@master is cached. Copying from home. 00:00:05.525 [Pipeline] node 00:00:05.533 Running on VM-host-SM16 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.535 [Pipeline] { 00:00:05.543 [Pipeline] catchError 00:00:05.544 [Pipeline] { 00:00:05.555 [Pipeline] wrap 00:00:05.565 [Pipeline] { 00:00:05.573 [Pipeline] stage 00:00:05.575 [Pipeline] { (Prologue) 00:00:05.598 [Pipeline] echo 00:00:05.600 Node: VM-host-SM16 00:00:05.607 [Pipeline] cleanWs 00:00:05.615 [WS-CLEANUP] Deleting project workspace... 00:00:05.615 [WS-CLEANUP] Deferred wipeout is used... 
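[Editor's note] The prologue above is the Jenkins git plugin pinning the build to an exact revision: a depth-1 fetch of the branch followed by a forced checkout of the commit behind FETCH_HEAD. Condensed into plain shell, the sequence it ran is roughly the following (a sketch assembled from the log entries above; credential and proxy setup omitted):

    # Shallow-fetch the branch, then pin the workspace to the exact commit.
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    rev=$(git rev-parse 'FETCH_HEAD^{commit}')   # b9dd3f7ec12b... in this run
    git checkout -f "$rev"                       # detached HEAD at the pinned revision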
00:00:05.621 [WS-CLEANUP] done 00:00:05.863 [Pipeline] setCustomBuildProperty 00:00:05.955 [Pipeline] httpRequest 00:00:06.942 [Pipeline] echo 00:00:06.943 Sorcerer 10.211.164.101 is alive 00:00:06.952 [Pipeline] retry 00:00:06.954 [Pipeline] { 00:00:06.964 [Pipeline] httpRequest 00:00:06.968 HttpMethod: GET 00:00:06.969 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.969 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.982 Response Code: HTTP/1.1 200 OK 00:00:06.983 Success: Status code 200 is in the accepted range: 200,404 00:00:06.983 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.109 [Pipeline] } 00:00:10.126 [Pipeline] // retry 00:00:10.134 [Pipeline] sh 00:00:10.413 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.428 [Pipeline] httpRequest 00:00:11.208 [Pipeline] echo 00:00:11.210 Sorcerer 10.211.164.101 is alive 00:00:11.219 [Pipeline] retry 00:00:11.221 [Pipeline] { 00:00:11.232 [Pipeline] httpRequest 00:00:11.235 HttpMethod: GET 00:00:11.236 URL: http://10.211.164.101/packages/spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:00:11.237 Sending request to url: http://10.211.164.101/packages/spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:00:11.249 Response Code: HTTP/1.1 200 OK 00:00:11.250 Success: Status code 200 is in the accepted range: 200,404 00:00:11.251 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:01:06.166 [Pipeline] } 00:01:06.181 [Pipeline] // retry 00:01:06.188 [Pipeline] sh 00:01:06.468 + tar --no-same-owner -xf spdk_514198259dd4c8bcbf912c664217c4907cf2b670.tar.gz 00:01:09.760 [Pipeline] sh 00:01:10.034 + git -C spdk log --oneline -n5 00:01:10.034 514198259 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:01:10.034 59da1a1d7 nvmf: Expose DIF type of namespace to host again 00:01:10.034 9a34ab7f7 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:01:10.034 b0a35519c nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:01:10.034 dec6d3843 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:10.050 [Pipeline] writeFile 00:01:10.066 [Pipeline] sh 00:01:10.348 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:10.359 [Pipeline] sh 00:01:10.638 + cat autorun-spdk.conf 00:01:10.638 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.638 SPDK_TEST_NVME=1 00:01:10.638 SPDK_TEST_FTL=1 00:01:10.638 SPDK_TEST_ISAL=1 00:01:10.638 SPDK_RUN_ASAN=1 00:01:10.638 SPDK_RUN_UBSAN=1 00:01:10.638 SPDK_TEST_XNVME=1 00:01:10.638 SPDK_TEST_NVME_FDP=1 00:01:10.638 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.645 RUN_NIGHTLY=0 00:01:10.647 [Pipeline] } 00:01:10.663 [Pipeline] // stage 00:01:10.679 [Pipeline] stage 00:01:10.681 [Pipeline] { (Run VM) 00:01:10.694 [Pipeline] sh 00:01:10.974 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:10.974 + echo 'Start stage prepare_nvme.sh' 00:01:10.974 Start stage prepare_nvme.sh 00:01:10.974 + [[ -n 0 ]] 00:01:10.974 + disk_prefix=ex0 00:01:10.974 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:10.974 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:10.974 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:10.974 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.974 ++ SPDK_TEST_NVME=1 00:01:10.974 ++ 
SPDK_TEST_FTL=1 00:01:10.974 ++ SPDK_TEST_ISAL=1 00:01:10.974 ++ SPDK_RUN_ASAN=1 00:01:10.974 ++ SPDK_RUN_UBSAN=1 00:01:10.974 ++ SPDK_TEST_XNVME=1 00:01:10.974 ++ SPDK_TEST_NVME_FDP=1 00:01:10.974 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:10.974 ++ RUN_NIGHTLY=0 00:01:10.974 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:10.974 + nvme_files=() 00:01:10.974 + declare -A nvme_files 00:01:10.974 + backend_dir=/var/lib/libvirt/images/backends 00:01:10.974 + nvme_files['nvme.img']=5G 00:01:10.974 + nvme_files['nvme-cmb.img']=5G 00:01:10.974 + nvme_files['nvme-multi0.img']=4G 00:01:10.974 + nvme_files['nvme-multi1.img']=4G 00:01:10.974 + nvme_files['nvme-multi2.img']=4G 00:01:10.974 + nvme_files['nvme-openstack.img']=8G 00:01:10.974 + nvme_files['nvme-zns.img']=5G 00:01:10.974 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:10.974 + (( SPDK_TEST_FTL == 1 )) 00:01:10.974 + nvme_files["nvme-ftl.img"]=6G 00:01:10.974 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:10.974 + nvme_files["nvme-fdp.img"]=1G 00:01:10.974 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:10.974 + for nvme in "${!nvme_files[@]}" 00:01:10.974 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:10.974 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:10.974 + for nvme in "${!nvme_files[@]}" 00:01:10.974 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:01:10.974 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:10.974 + for nvme in "${!nvme_files[@]}" 00:01:10.974 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:11.967 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.967 + for nvme in "${!nvme_files[@]}" 00:01:11.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:11.967 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:11.967 + for nvme in "${!nvme_files[@]}" 00:01:11.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:11.967 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:11.967 + for nvme in "${!nvme_files[@]}" 00:01:11.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:11.967 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.967 + for nvme in "${!nvme_files[@]}" 00:01:11.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:11.967 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:11.967 + for nvme in "${!nvme_files[@]}" 00:01:11.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:01:11.967 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:11.967 + for nvme in "${!nvme_files[@]}" 00:01:11.967 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:12.902 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.902 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:12.902 + echo 'End stage prepare_nvme.sh' 00:01:12.902 End stage prepare_nvme.sh 00:01:12.913 [Pipeline] sh 00:01:13.194 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:13.194 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:13.194 00:01:13.194 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:13.195 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:13.195 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:13.195 HELP=0 00:01:13.195 DRY_RUN=0 00:01:13.195 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:01:13.195 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:13.195 NVME_AUTO_CREATE=0 00:01:13.195 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:01:13.195 NVME_CMB=,,,, 00:01:13.195 NVME_PMR=,,,, 00:01:13.195 NVME_ZNS=,,,, 00:01:13.195 NVME_MS=true,,,, 00:01:13.195 NVME_FDP=,,,on, 00:01:13.195 SPDK_VAGRANT_DISTRO=fedora39 00:01:13.195 SPDK_VAGRANT_VMCPU=10 00:01:13.195 SPDK_VAGRANT_VMRAM=12288 00:01:13.195 SPDK_VAGRANT_PROVIDER=libvirt 00:01:13.195 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:13.195 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:13.195 SPDK_OPENSTACK_NETWORK=0 00:01:13.195 VAGRANT_PACKAGE_BOX=0 00:01:13.195 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:13.195 FORCE_DISTRO=true 00:01:13.195 VAGRANT_BOX_VERSION= 00:01:13.195 EXTRA_VAGRANTFILES= 00:01:13.195 NIC_MODEL=e1000 00:01:13.195 00:01:13.195 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:13.195 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:16.480 Bringing machine 'default' up with 'libvirt' provider... 00:01:17.425 ==> default: Creating image (snapshot of base box volume). 00:01:17.684 ==> default: Creating domain with the following settings... 
00:01:17.685 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731669180_25b719fb28d3102468c8 00:01:17.685 ==> default: -- Domain type: kvm 00:01:17.685 ==> default: -- Cpus: 10 00:01:17.685 ==> default: -- Feature: acpi 00:01:17.685 ==> default: -- Feature: apic 00:01:17.685 ==> default: -- Feature: pae 00:01:17.685 ==> default: -- Memory: 12288M 00:01:17.685 ==> default: -- Memory Backing: hugepages: 00:01:17.685 ==> default: -- Management MAC: 00:01:17.685 ==> default: -- Loader: 00:01:17.685 ==> default: -- Nvram: 00:01:17.685 ==> default: -- Base box: spdk/fedora39 00:01:17.685 ==> default: -- Storage pool: default 00:01:17.685 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731669180_25b719fb28d3102468c8.img (20G) 00:01:17.685 ==> default: -- Volume Cache: default 00:01:17.685 ==> default: -- Kernel: 00:01:17.685 ==> default: -- Initrd: 00:01:17.685 ==> default: -- Graphics Type: vnc 00:01:17.685 ==> default: -- Graphics Port: -1 00:01:17.685 ==> default: -- Graphics IP: 127.0.0.1 00:01:17.685 ==> default: -- Graphics Password: Not defined 00:01:17.685 ==> default: -- Video Type: cirrus 00:01:17.685 ==> default: -- Video VRAM: 9216 00:01:17.685 ==> default: -- Sound Type: 00:01:17.685 ==> default: -- Keymap: en-us 00:01:17.685 ==> default: -- TPM Path: 00:01:17.685 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:17.685 ==> default: -- Command line args: 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:17.685 ==> default: -> value=-drive, 00:01:17.685 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:17.685 ==> default: -> value=-drive, 00:01:17.685 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:17.685 ==> default: -> value=-drive, 00:01:17.685 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.685 ==> default: -> value=-drive, 00:01:17.685 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.685 ==> default: -> value=-drive, 00:01:17.685 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:17.685 ==> default: -> value=-drive, 00:01:17.685 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:17.685 ==> default: -> value=-device, 00:01:17.685 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.685 ==> default: Creating shared folders metadata... 00:01:17.685 ==> default: Starting domain. 00:01:19.585 ==> default: Waiting for domain to get an IP address... 00:01:34.460 ==> default: Waiting for SSH to become available... 00:01:35.834 ==> default: Configuring and enabling network interfaces... 00:01:41.121 default: SSH address: 192.168.121.90:22 00:01:41.121 default: SSH username: vagrant 00:01:41.121 default: SSH auth method: private key 00:01:43.021 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:51.131 ==> default: Mounting SSHFS shared folder... 00:01:52.063 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:52.063 ==> default: Checking Mount.. 00:01:53.440 ==> default: Folder Successfully Mounted! 00:01:53.440 ==> default: Running provisioner: file... 00:01:54.008 default: ~/.gitconfig => .gitconfig 00:01:54.577 00:01:54.577 SUCCESS! 00:01:54.577 00:01:54.577 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:54.577 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:54.577 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:54.577 00:01:54.587 [Pipeline] } 00:01:54.602 [Pipeline] // stage 00:01:54.613 [Pipeline] dir 00:01:54.614 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:54.615 [Pipeline] { 00:01:54.628 [Pipeline] catchError 00:01:54.630 [Pipeline] { 00:01:54.644 [Pipeline] sh 00:01:54.924 + vagrant ssh-config --host vagrant 00:01:54.924 + sed -ne /^Host/,$p 00:01:54.924 + tee ssh_conf 00:01:59.111 Host vagrant 00:01:59.111 HostName 192.168.121.90 00:01:59.111 User vagrant 00:01:59.111 Port 22 00:01:59.111 UserKnownHostsFile /dev/null 00:01:59.111 StrictHostKeyChecking no 00:01:59.111 PasswordAuthentication no 00:01:59.111 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:59.111 IdentitiesOnly yes 00:01:59.111 LogLevel FATAL 00:01:59.111 ForwardAgent yes 00:01:59.111 ForwardX11 yes 00:01:59.111 00:01:59.125 [Pipeline] withEnv 00:01:59.127 [Pipeline] { 00:01:59.142 [Pipeline] sh 00:01:59.426 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:59.426 source /etc/os-release 00:01:59.426 [[ -e /image.version ]] && img=$(< /image.version) 00:01:59.426 # Minimal, systemd-like check.
00:01:59.426 if [[ -e /.dockerenv ]]; then 00:01:59.426 # Clear garbage from the node's name: 00:01:59.426 # agt-er_autotest_547-896 -> autotest_547-896 00:01:59.426 # $HOSTNAME is the actual container id 00:01:59.426 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:59.426 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:59.426 # We can assume this is a mount from a host where container is running, 00:01:59.426 # so fetch its hostname to easily identify the target swarm worker. 00:01:59.426 container="$(< /etc/hostname) ($agent)" 00:01:59.426 else 00:01:59.426 # Fallback 00:01:59.426 container=$agent 00:01:59.426 fi 00:01:59.426 fi 00:01:59.426 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:59.426 00:01:59.695 [Pipeline] } 00:01:59.711 [Pipeline] // withEnv 00:01:59.718 [Pipeline] setCustomBuildProperty 00:01:59.733 [Pipeline] stage 00:01:59.734 [Pipeline] { (Tests) 00:01:59.750 [Pipeline] sh 00:02:00.063 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:00.335 [Pipeline] sh 00:02:00.613 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:00.884 [Pipeline] timeout 00:02:00.885 Timeout set to expire in 50 min 00:02:00.886 [Pipeline] { 00:02:00.899 [Pipeline] sh 00:02:01.176 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:01.743 HEAD is now at 514198259 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:02:01.756 [Pipeline] sh 00:02:02.034 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:02.303 [Pipeline] sh 00:02:02.582 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:02.855 [Pipeline] sh 00:02:03.133 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:03.392 ++ readlink -f spdk_repo 00:02:03.392 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:03.392 + [[ -n /home/vagrant/spdk_repo ]] 00:02:03.392 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:03.392 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:03.392 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:03.392 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:03.392 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:03.392 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:03.392 + cd /home/vagrant/spdk_repo 00:02:03.392 + source /etc/os-release 00:02:03.392 ++ NAME='Fedora Linux' 00:02:03.392 ++ VERSION='39 (Cloud Edition)' 00:02:03.392 ++ ID=fedora 00:02:03.392 ++ VERSION_ID=39 00:02:03.392 ++ VERSION_CODENAME= 00:02:03.392 ++ PLATFORM_ID=platform:f39 00:02:03.392 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:03.392 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:03.392 ++ LOGO=fedora-logo-icon 00:02:03.392 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:03.392 ++ HOME_URL=https://fedoraproject.org/ 00:02:03.392 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:03.392 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:03.392 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:03.392 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:03.392 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:03.392 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:03.392 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:03.392 ++ SUPPORT_END=2024-11-12 00:02:03.392 ++ VARIANT='Cloud Edition' 00:02:03.392 ++ VARIANT_ID=cloud 00:02:03.392 + uname -a 00:02:03.392 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:03.392 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:03.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:03.910 Hugepages 00:02:03.910 node hugesize free / total 00:02:03.910 node0 1048576kB 0 / 0 00:02:03.910 node0 2048kB 0 / 0 00:02:03.910 00:02:03.910 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.910 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:03.910 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:04.169 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:04.169 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:04.169 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:04.169 + rm -f /tmp/spdk-ld-path 00:02:04.169 + source autorun-spdk.conf 00:02:04.169 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.169 ++ SPDK_TEST_NVME=1 00:02:04.169 ++ SPDK_TEST_FTL=1 00:02:04.169 ++ SPDK_TEST_ISAL=1 00:02:04.169 ++ SPDK_RUN_ASAN=1 00:02:04.169 ++ SPDK_RUN_UBSAN=1 00:02:04.169 ++ SPDK_TEST_XNVME=1 00:02:04.169 ++ SPDK_TEST_NVME_FDP=1 00:02:04.169 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.169 ++ RUN_NIGHTLY=0 00:02:04.169 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:04.169 + [[ -n '' ]] 00:02:04.169 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:04.169 + for M in /var/spdk/build-*-manifest.txt 00:02:04.169 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:04.169 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.169 + for M in /var/spdk/build-*-manifest.txt 00:02:04.169 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:04.169 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.169 + for M in /var/spdk/build-*-manifest.txt 00:02:04.169 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:04.169 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.169 ++ uname 00:02:04.169 + [[ Linux == \L\i\n\u\x ]] 00:02:04.169 + sudo dmesg -T 00:02:04.169 + sudo dmesg --clear 00:02:04.169 + dmesg_pid=5408 00:02:04.169 
+ sudo dmesg -Tw 00:02:04.169 + [[ Fedora Linux == FreeBSD ]] 00:02:04.169 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.169 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.169 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:04.169 + [[ -x /usr/src/fio-static/fio ]] 00:02:04.169 + export FIO_BIN=/usr/src/fio-static/fio 00:02:04.169 + FIO_BIN=/usr/src/fio-static/fio 00:02:04.169 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:04.169 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:04.169 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:04.169 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.169 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:04.169 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:04.169 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.169 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:04.169 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.169 11:13:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:04.169 11:13:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.169 11:13:47 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:04.169 11:13:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:04.169 11:13:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.428 11:13:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:04.428 11:13:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:04.428 11:13:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:04.428 11:13:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:04.428 11:13:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.428 11:13:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.428 11:13:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.428 11:13:47 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.428 11:13:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.428 11:13:47 -- paths/export.sh@5 -- $ export PATH 00:02:04.429 11:13:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:04.429 11:13:47 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:04.429 11:13:47 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:04.429 11:13:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731669227.XXXXXX 00:02:04.429 11:13:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731669227.h5S5nM 00:02:04.429 11:13:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:04.429 11:13:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:04.429 11:13:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:04.429 11:13:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:04.429 11:13:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:04.429 11:13:47 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:04.429 11:13:47 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:04.429 11:13:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.429 11:13:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:04.429 11:13:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:04.429 11:13:47 -- pm/common@17 -- $ local monitor 00:02:04.429 11:13:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.429 11:13:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.429 11:13:47 -- pm/common@25 -- $ sleep 1 00:02:04.429 11:13:47 -- pm/common@21 -- $ date +%s 00:02:04.429 11:13:47 -- pm/common@21 -- $ date +%s 00:02:04.429 11:13:47 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731669227 00:02:04.429 11:13:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731669227 00:02:04.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731669227_collect-cpu-load.pm.log 00:02:04.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731669227_collect-vmstat.pm.log 00:02:05.363 11:13:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:05.363 11:13:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.363 11:13:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.363 11:13:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.363 11:13:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.363 Fri Nov 15 11:13:48 AM UTC 2024 00:02:05.363 11:13:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.363 v25.01-pre-215-g514198259 00:02:05.363 11:13:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:05.363 11:13:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:05.363 11:13:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:05.363 11:13:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:05.363 11:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.363 ************************************ 00:02:05.363 START TEST asan 00:02:05.363 ************************************ 00:02:05.363 using asan 00:02:05.363 11:13:48 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:05.363 00:02:05.363 real 0m0.000s 00:02:05.363 user 0m0.000s 00:02:05.363 sys 0m0.000s 00:02:05.363 11:13:48 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:05.363 ************************************ 00:02:05.363 END TEST asan 00:02:05.363 ************************************ 00:02:05.363 11:13:48 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:05.363 11:13:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:05.363 11:13:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:05.363 11:13:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:05.363 11:13:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:05.363 11:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.363 ************************************ 00:02:05.364 START TEST ubsan 00:02:05.364 ************************************ 00:02:05.364 using ubsan 00:02:05.364 11:13:48 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:05.364 00:02:05.364 real 0m0.000s 00:02:05.364 user 0m0.000s 00:02:05.364 sys 0m0.000s 00:02:05.364 11:13:48 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:05.364 11:13:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:05.364 ************************************ 00:02:05.364 END TEST ubsan 00:02:05.364 ************************************ 00:02:05.621 11:13:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:05.621 11:13:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:05.621 11:13:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:05.621 11:13:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:05.621 11:13:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:05.621 11:13:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:05.621 11:13:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:05.621 11:13:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:05.621 11:13:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:05.621 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:05.621 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.188 Using 'verbs' RDMA provider 00:02:21.995 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:34.205 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:34.205 Creating mk/config.mk...done. 00:02:34.205 Creating mk/cc.flags.mk...done. 00:02:34.205 Type 'make' to build. 00:02:34.205 11:14:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:34.205 11:14:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:34.205 11:14:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:34.205 11:14:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.205 ************************************ 00:02:34.205 START TEST make 00:02:34.205 ************************************ 00:02:34.205 11:14:15 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:34.205 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:34.205 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:34.205 meson setup builddir \ 00:02:34.205 -Dwith-libaio=enabled \ 00:02:34.205 -Dwith-liburing=enabled \ 00:02:34.205 -Dwith-libvfn=disabled \ 00:02:34.205 -Dwith-spdk=disabled \ 00:02:34.205 -Dexamples=false \ 00:02:34.205 -Dtests=false \ 00:02:34.205 -Dtools=false && \ 00:02:34.205 meson compile -C builddir && \ 00:02:34.205 cd -) 00:02:34.205 make[1]: Nothing to be done for 'all'. 
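[Editor's note] Before compiling SPDK proper, the top-level `make` drops into the bundled xnvme subproject and drives a self-contained meson build, exactly as the recipe echoed above shows. Reproduced by hand from a checked-out tree, it boils down to (a condensed sketch of the logged recipe; paths follow the spdk_repo layout used in this run):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    # Library-only build: examples/tests/tools off, SPDK and libvfn backends disabled.
    meson setup builddir \
          -Dwith-libaio=enabled \
          -Dwith-liburing=enabled \
          -Dwith-libvfn=disabled \
          -Dwith-spdk=disabled \
          -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir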
00:02:35.581 The Meson build system 00:02:35.581 Version: 1.5.0 00:02:35.581 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:35.581 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:35.581 Build type: native build 00:02:35.581 Project name: xnvme 00:02:35.581 Project version: 0.7.5 00:02:35.581 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:35.581 C linker for the host machine: cc ld.bfd 2.40-14 00:02:35.581 Host machine cpu family: x86_64 00:02:35.581 Host machine cpu: x86_64 00:02:35.581 Message: host_machine.system: linux 00:02:35.581 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:35.581 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:35.581 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:35.581 Run-time dependency threads found: YES 00:02:35.581 Has header "setupapi.h" : NO 00:02:35.581 Has header "linux/blkzoned.h" : YES 00:02:35.581 Has header "linux/blkzoned.h" : YES (cached) 00:02:35.581 Has header "libaio.h" : YES 00:02:35.581 Library aio found: YES 00:02:35.581 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:35.581 Run-time dependency liburing found: YES 2.2 00:02:35.581 Dependency libvfn skipped: feature with-libvfn disabled 00:02:35.581 Found CMake: /usr/bin/cmake (3.27.7) 00:02:35.581 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:35.581 Subproject spdk : skipped: feature with-spdk disabled 00:02:35.581 Run-time dependency appleframeworks found: NO (tried framework) 00:02:35.581 Run-time dependency appleframeworks found: NO (tried framework) 00:02:35.581 Library rt found: YES 00:02:35.582 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:35.582 Configuring xnvme_config.h using configuration 00:02:35.582 Configuring xnvme.spec using configuration 00:02:35.582 Run-time dependency bash-completion found: YES 2.11 00:02:35.582 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:35.582 Program cp found: YES (/usr/bin/cp) 00:02:35.582 Build targets in project: 3 00:02:35.582 00:02:35.582 xnvme 0.7.5 00:02:35.582 00:02:35.582 Subprojects 00:02:35.582 spdk : NO Feature 'with-spdk' disabled 00:02:35.582 00:02:35.582 User defined options 00:02:35.582 examples : false 00:02:35.582 tests : false 00:02:35.582 tools : false 00:02:35.582 with-libaio : enabled 00:02:35.582 with-liburing: enabled 00:02:35.582 with-libvfn : disabled 00:02:35.582 with-spdk : disabled 00:02:35.582 00:02:35.582 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.148 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:36.148 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:36.148 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:36.148 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:36.148 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:36.148 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:36.148 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:36.148 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:36.148 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:36.148 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:36.148 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 
00:02:36.407 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:36.407 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:36.407 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:36.407 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:36.407 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:36.407 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:36.407 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:36.407 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:36.407 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:36.407 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:36.407 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:36.407 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:36.407 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:36.407 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:36.407 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:36.407 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:36.667 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:36.667 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:36.667 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:36.667 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:36.667 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:36.667 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:36.667 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:36.667 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:36.667 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:36.667 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:36.667 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:36.667 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:36.667 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:36.667 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:36.667 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:36.667 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:36.667 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:36.667 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:36.667 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:36.667 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:36.667 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:36.667 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:36.667 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:36.667 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 
00:02:36.667 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:36.667 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:36.667 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:36.667 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:36.667 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:36.925 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:36.925 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:36.925 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:36.925 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:36.925 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:36.925 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:36.925 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:36.925 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:36.925 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:36.925 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:36.926 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:36.926 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:36.926 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:36.926 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:36.926 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:36.926 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:36.926 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:37.184 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:37.442 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:37.442 [75/76] Linking static target lib/libxnvme.a 00:02:37.442 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:37.442 INFO: autodetecting backend as ninja 00:02:37.442 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:37.701 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:45.812 The Meson build system 00:02:45.812 Version: 1.5.0 00:02:45.812 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:45.812 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:45.812 Build type: native build 00:02:45.812 Program cat found: YES (/usr/bin/cat) 00:02:45.812 Project name: DPDK 00:02:45.812 Project version: 24.03.0 00:02:45.812 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:45.812 C linker for the host machine: cc ld.bfd 2.40-14 00:02:45.812 Host machine cpu family: x86_64 00:02:45.812 Host machine cpu: x86_64 00:02:45.812 Message: ## Building in Developer Mode ## 00:02:45.812 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:45.812 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:45.812 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:45.812 Program python3 found: YES (/usr/bin/python3) 00:02:45.812 Program cat found: YES (/usr/bin/cat) 00:02:45.812 Compiler for C supports arguments -march=native: YES 00:02:45.812 Checking for size of "void *" : 8 00:02:45.812 Checking for size of "void *" : 8 (cached) 00:02:45.812 Compiler for C supports link 
arguments -Wl,--undefined-version: YES 00:02:45.812 Library m found: YES 00:02:45.812 Library numa found: YES 00:02:45.812 Has header "numaif.h" : YES 00:02:45.812 Library fdt found: NO 00:02:45.812 Library execinfo found: NO 00:02:45.812 Has header "execinfo.h" : YES 00:02:45.812 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:45.812 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:45.812 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:45.813 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:45.813 Run-time dependency openssl found: YES 3.1.1 00:02:45.813 Run-time dependency libpcap found: YES 1.10.4 00:02:45.813 Has header "pcap.h" with dependency libpcap: YES 00:02:45.813 Compiler for C supports arguments -Wcast-qual: YES 00:02:45.813 Compiler for C supports arguments -Wdeprecated: YES 00:02:45.813 Compiler for C supports arguments -Wformat: YES 00:02:45.813 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:45.813 Compiler for C supports arguments -Wformat-security: NO 00:02:45.813 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:45.813 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:45.813 Compiler for C supports arguments -Wnested-externs: YES 00:02:45.813 Compiler for C supports arguments -Wold-style-definition: YES 00:02:45.813 Compiler for C supports arguments -Wpointer-arith: YES 00:02:45.813 Compiler for C supports arguments -Wsign-compare: YES 00:02:45.813 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:45.813 Compiler for C supports arguments -Wundef: YES 00:02:45.813 Compiler for C supports arguments -Wwrite-strings: YES 00:02:45.813 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:45.813 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:45.813 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:45.813 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:45.813 Program objdump found: YES (/usr/bin/objdump) 00:02:45.813 Compiler for C supports arguments -mavx512f: YES 00:02:45.813 Checking if "AVX512 checking" compiles: YES 00:02:45.813 Fetching value of define "__SSE4_2__" : 1 00:02:45.813 Fetching value of define "__AES__" : 1 00:02:45.813 Fetching value of define "__AVX__" : 1 00:02:45.813 Fetching value of define "__AVX2__" : 1 00:02:45.813 Fetching value of define "__AVX512BW__" : (undefined) 00:02:45.813 Fetching value of define "__AVX512CD__" : (undefined) 00:02:45.813 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:45.813 Fetching value of define "__AVX512F__" : (undefined) 00:02:45.813 Fetching value of define "__AVX512VL__" : (undefined) 00:02:45.813 Fetching value of define "__PCLMUL__" : 1 00:02:45.813 Fetching value of define "__RDRND__" : 1 00:02:45.813 Fetching value of define "__RDSEED__" : 1 00:02:45.813 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:45.813 Fetching value of define "__znver1__" : (undefined) 00:02:45.813 Fetching value of define "__znver2__" : (undefined) 00:02:45.813 Fetching value of define "__znver3__" : (undefined) 00:02:45.813 Fetching value of define "__znver4__" : (undefined) 00:02:45.813 Library asan found: YES 00:02:45.813 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:45.813 Message: lib/log: Defining dependency "log" 00:02:45.813 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.813 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.813 Library rt found: YES 
00:02:45.813 Checking for function "getentropy" : NO 00:02:45.813 Message: lib/eal: Defining dependency "eal" 00:02:45.813 Message: lib/ring: Defining dependency "ring" 00:02:45.813 Message: lib/rcu: Defining dependency "rcu" 00:02:45.813 Message: lib/mempool: Defining dependency "mempool" 00:02:45.813 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.813 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.813 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:45.813 Compiler for C supports arguments -mpclmul: YES 00:02:45.813 Compiler for C supports arguments -maes: YES 00:02:45.813 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.813 Compiler for C supports arguments -mavx512bw: YES 00:02:45.813 Compiler for C supports arguments -mavx512dq: YES 00:02:45.813 Compiler for C supports arguments -mavx512vl: YES 00:02:45.813 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.813 Compiler for C supports arguments -mavx2: YES 00:02:45.813 Compiler for C supports arguments -mavx: YES 00:02:45.813 Message: lib/net: Defining dependency "net" 00:02:45.813 Message: lib/meter: Defining dependency "meter" 00:02:45.813 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.813 Message: lib/pci: Defining dependency "pci" 00:02:45.813 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.813 Message: lib/hash: Defining dependency "hash" 00:02:45.813 Message: lib/timer: Defining dependency "timer" 00:02:45.813 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.813 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.813 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.813 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.813 Message: lib/power: Defining dependency "power" 00:02:45.813 Message: lib/reorder: Defining dependency "reorder" 00:02:45.813 Message: lib/security: Defining dependency "security" 00:02:45.813 Has header "linux/userfaultfd.h" : YES 00:02:45.813 Has header "linux/vduse.h" : YES 00:02:45.813 Message: lib/vhost: Defining dependency "vhost" 00:02:45.813 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.813 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.813 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.813 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.813 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.813 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.813 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.813 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.813 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.813 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.813 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:45.813 Configuring doxy-api-html.conf using configuration 00:02:45.813 Configuring doxy-api-man.conf using configuration 00:02:45.813 Program mandb found: YES (/usr/bin/mandb) 00:02:45.813 Program sphinx-build found: NO 00:02:45.813 Configuring rte_build_config.h using configuration 00:02:45.813 Message: 00:02:45.813 ================= 00:02:45.813 Applications Enabled 00:02:45.813 ================= 00:02:45.813 00:02:45.813 apps: 00:02:45.813 00:02:45.813 00:02:45.813 Message: 00:02:45.813 ================= 00:02:45.813 Libraries Enabled 00:02:45.813 
================= 00:02:45.813 00:02:45.813 libs: 00:02:45.813 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.813 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.813 cryptodev, dmadev, power, reorder, security, vhost, 00:02:45.813 00:02:45.813 Message: 00:02:45.813 =============== 00:02:45.813 Drivers Enabled 00:02:45.813 =============== 00:02:45.813 00:02:45.813 common: 00:02:45.813 00:02:45.813 bus: 00:02:45.813 pci, vdev, 00:02:45.813 mempool: 00:02:45.813 ring, 00:02:45.813 dma: 00:02:45.813 00:02:45.813 net: 00:02:45.813 00:02:45.813 crypto: 00:02:45.813 00:02:45.813 compress: 00:02:45.813 00:02:45.813 vdpa: 00:02:45.813 00:02:45.813 00:02:45.813 Message: 00:02:45.813 ================= 00:02:45.813 Content Skipped 00:02:45.813 ================= 00:02:45.813 00:02:45.813 apps: 00:02:45.813 dumpcap: explicitly disabled via build config 00:02:45.813 graph: explicitly disabled via build config 00:02:45.813 pdump: explicitly disabled via build config 00:02:45.813 proc-info: explicitly disabled via build config 00:02:45.813 test-acl: explicitly disabled via build config 00:02:45.813 test-bbdev: explicitly disabled via build config 00:02:45.813 test-cmdline: explicitly disabled via build config 00:02:45.813 test-compress-perf: explicitly disabled via build config 00:02:45.813 test-crypto-perf: explicitly disabled via build config 00:02:45.813 test-dma-perf: explicitly disabled via build config 00:02:45.813 test-eventdev: explicitly disabled via build config 00:02:45.813 test-fib: explicitly disabled via build config 00:02:45.813 test-flow-perf: explicitly disabled via build config 00:02:45.813 test-gpudev: explicitly disabled via build config 00:02:45.813 test-mldev: explicitly disabled via build config 00:02:45.813 test-pipeline: explicitly disabled via build config 00:02:45.813 test-pmd: explicitly disabled via build config 00:02:45.813 test-regex: explicitly disabled via build config 00:02:45.813 test-sad: explicitly disabled via build config 00:02:45.813 test-security-perf: explicitly disabled via build config 00:02:45.813 00:02:45.813 libs: 00:02:45.813 argparse: explicitly disabled via build config 00:02:45.813 metrics: explicitly disabled via build config 00:02:45.813 acl: explicitly disabled via build config 00:02:45.813 bbdev: explicitly disabled via build config 00:02:45.813 bitratestats: explicitly disabled via build config 00:02:45.813 bpf: explicitly disabled via build config 00:02:45.813 cfgfile: explicitly disabled via build config 00:02:45.813 distributor: explicitly disabled via build config 00:02:45.813 efd: explicitly disabled via build config 00:02:45.813 eventdev: explicitly disabled via build config 00:02:45.813 dispatcher: explicitly disabled via build config 00:02:45.813 gpudev: explicitly disabled via build config 00:02:45.813 gro: explicitly disabled via build config 00:02:45.813 gso: explicitly disabled via build config 00:02:45.813 ip_frag: explicitly disabled via build config 00:02:45.813 jobstats: explicitly disabled via build config 00:02:45.813 latencystats: explicitly disabled via build config 00:02:45.813 lpm: explicitly disabled via build config 00:02:45.813 member: explicitly disabled via build config 00:02:45.813 pcapng: explicitly disabled via build config 00:02:45.813 rawdev: explicitly disabled via build config 00:02:45.813 regexdev: explicitly disabled via build config 00:02:45.813 mldev: explicitly disabled via build config 00:02:45.813 rib: explicitly disabled via build config 00:02:45.814 sched: explicitly 
disabled via build config 00:02:45.814 stack: explicitly disabled via build config 00:02:45.814 ipsec: explicitly disabled via build config 00:02:45.814 pdcp: explicitly disabled via build config 00:02:45.814 fib: explicitly disabled via build config 00:02:45.814 port: explicitly disabled via build config 00:02:45.814 pdump: explicitly disabled via build config 00:02:45.814 table: explicitly disabled via build config 00:02:45.814 pipeline: explicitly disabled via build config 00:02:45.814 graph: explicitly disabled via build config 00:02:45.814 node: explicitly disabled via build config 00:02:45.814 00:02:45.814 drivers: 00:02:45.814 common/cpt: not in enabled drivers build config 00:02:45.814 common/dpaax: not in enabled drivers build config 00:02:45.814 common/iavf: not in enabled drivers build config 00:02:45.814 common/idpf: not in enabled drivers build config 00:02:45.814 common/ionic: not in enabled drivers build config 00:02:45.814 common/mvep: not in enabled drivers build config 00:02:45.814 common/octeontx: not in enabled drivers build config 00:02:45.814 bus/auxiliary: not in enabled drivers build config 00:02:45.814 bus/cdx: not in enabled drivers build config 00:02:45.814 bus/dpaa: not in enabled drivers build config 00:02:45.814 bus/fslmc: not in enabled drivers build config 00:02:45.814 bus/ifpga: not in enabled drivers build config 00:02:45.814 bus/platform: not in enabled drivers build config 00:02:45.814 bus/uacce: not in enabled drivers build config 00:02:45.814 bus/vmbus: not in enabled drivers build config 00:02:45.814 common/cnxk: not in enabled drivers build config 00:02:45.814 common/mlx5: not in enabled drivers build config 00:02:45.814 common/nfp: not in enabled drivers build config 00:02:45.814 common/nitrox: not in enabled drivers build config 00:02:45.814 common/qat: not in enabled drivers build config 00:02:45.814 common/sfc_efx: not in enabled drivers build config 00:02:45.814 mempool/bucket: not in enabled drivers build config 00:02:45.814 mempool/cnxk: not in enabled drivers build config 00:02:45.814 mempool/dpaa: not in enabled drivers build config 00:02:45.814 mempool/dpaa2: not in enabled drivers build config 00:02:45.814 mempool/octeontx: not in enabled drivers build config 00:02:45.814 mempool/stack: not in enabled drivers build config 00:02:45.814 dma/cnxk: not in enabled drivers build config 00:02:45.814 dma/dpaa: not in enabled drivers build config 00:02:45.814 dma/dpaa2: not in enabled drivers build config 00:02:45.814 dma/hisilicon: not in enabled drivers build config 00:02:45.814 dma/idxd: not in enabled drivers build config 00:02:45.814 dma/ioat: not in enabled drivers build config 00:02:45.814 dma/skeleton: not in enabled drivers build config 00:02:45.814 net/af_packet: not in enabled drivers build config 00:02:45.814 net/af_xdp: not in enabled drivers build config 00:02:45.814 net/ark: not in enabled drivers build config 00:02:45.814 net/atlantic: not in enabled drivers build config 00:02:45.814 net/avp: not in enabled drivers build config 00:02:45.814 net/axgbe: not in enabled drivers build config 00:02:45.814 net/bnx2x: not in enabled drivers build config 00:02:45.814 net/bnxt: not in enabled drivers build config 00:02:45.814 net/bonding: not in enabled drivers build config 00:02:45.814 net/cnxk: not in enabled drivers build config 00:02:45.814 net/cpfl: not in enabled drivers build config 00:02:45.814 net/cxgbe: not in enabled drivers build config 00:02:45.814 net/dpaa: not in enabled drivers build config 00:02:45.814 net/dpaa2: not in 
enabled drivers build config 00:02:45.814 net/e1000: not in enabled drivers build config 00:02:45.814 net/ena: not in enabled drivers build config 00:02:45.814 net/enetc: not in enabled drivers build config 00:02:45.814 net/enetfec: not in enabled drivers build config 00:02:45.814 net/enic: not in enabled drivers build config 00:02:45.814 net/failsafe: not in enabled drivers build config 00:02:45.814 net/fm10k: not in enabled drivers build config 00:02:45.814 net/gve: not in enabled drivers build config 00:02:45.814 net/hinic: not in enabled drivers build config 00:02:45.814 net/hns3: not in enabled drivers build config 00:02:45.814 net/i40e: not in enabled drivers build config 00:02:45.814 net/iavf: not in enabled drivers build config 00:02:45.814 net/ice: not in enabled drivers build config 00:02:45.814 net/idpf: not in enabled drivers build config 00:02:45.814 net/igc: not in enabled drivers build config 00:02:45.814 net/ionic: not in enabled drivers build config 00:02:45.814 net/ipn3ke: not in enabled drivers build config 00:02:45.814 net/ixgbe: not in enabled drivers build config 00:02:45.814 net/mana: not in enabled drivers build config 00:02:45.814 net/memif: not in enabled drivers build config 00:02:45.814 net/mlx4: not in enabled drivers build config 00:02:45.814 net/mlx5: not in enabled drivers build config 00:02:45.814 net/mvneta: not in enabled drivers build config 00:02:45.814 net/mvpp2: not in enabled drivers build config 00:02:45.814 net/netvsc: not in enabled drivers build config 00:02:45.814 net/nfb: not in enabled drivers build config 00:02:45.814 net/nfp: not in enabled drivers build config 00:02:45.814 net/ngbe: not in enabled drivers build config 00:02:45.814 net/null: not in enabled drivers build config 00:02:45.814 net/octeontx: not in enabled drivers build config 00:02:45.814 net/octeon_ep: not in enabled drivers build config 00:02:45.814 net/pcap: not in enabled drivers build config 00:02:45.814 net/pfe: not in enabled drivers build config 00:02:45.814 net/qede: not in enabled drivers build config 00:02:45.814 net/ring: not in enabled drivers build config 00:02:45.814 net/sfc: not in enabled drivers build config 00:02:45.814 net/softnic: not in enabled drivers build config 00:02:45.814 net/tap: not in enabled drivers build config 00:02:45.814 net/thunderx: not in enabled drivers build config 00:02:45.814 net/txgbe: not in enabled drivers build config 00:02:45.814 net/vdev_netvsc: not in enabled drivers build config 00:02:45.814 net/vhost: not in enabled drivers build config 00:02:45.814 net/virtio: not in enabled drivers build config 00:02:45.814 net/vmxnet3: not in enabled drivers build config 00:02:45.814 raw/*: missing internal dependency, "rawdev" 00:02:45.814 crypto/armv8: not in enabled drivers build config 00:02:45.814 crypto/bcmfs: not in enabled drivers build config 00:02:45.814 crypto/caam_jr: not in enabled drivers build config 00:02:45.814 crypto/ccp: not in enabled drivers build config 00:02:45.814 crypto/cnxk: not in enabled drivers build config 00:02:45.814 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.814 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.814 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.814 crypto/mlx5: not in enabled drivers build config 00:02:45.814 crypto/mvsam: not in enabled drivers build config 00:02:45.814 crypto/nitrox: not in enabled drivers build config 00:02:45.814 crypto/null: not in enabled drivers build config 00:02:45.814 crypto/octeontx: not in enabled drivers build config 
00:02:45.814 crypto/openssl: not in enabled drivers build config 00:02:45.814 crypto/scheduler: not in enabled drivers build config 00:02:45.814 crypto/uadk: not in enabled drivers build config 00:02:45.814 crypto/virtio: not in enabled drivers build config 00:02:45.814 compress/isal: not in enabled drivers build config 00:02:45.814 compress/mlx5: not in enabled drivers build config 00:02:45.814 compress/nitrox: not in enabled drivers build config 00:02:45.814 compress/octeontx: not in enabled drivers build config 00:02:45.814 compress/zlib: not in enabled drivers build config 00:02:45.814 regex/*: missing internal dependency, "regexdev" 00:02:45.814 ml/*: missing internal dependency, "mldev" 00:02:45.814 vdpa/ifc: not in enabled drivers build config 00:02:45.814 vdpa/mlx5: not in enabled drivers build config 00:02:45.814 vdpa/nfp: not in enabled drivers build config 00:02:45.814 vdpa/sfc: not in enabled drivers build config 00:02:45.814 event/*: missing internal dependency, "eventdev" 00:02:45.814 baseband/*: missing internal dependency, "bbdev" 00:02:45.814 gpu/*: missing internal dependency, "gpudev" 00:02:45.814 00:02:45.814 00:02:46.380 Build targets in project: 85 00:02:46.380 00:02:46.380 DPDK 24.03.0 00:02:46.380 00:02:46.380 User defined options 00:02:46.380 buildtype : debug 00:02:46.380 default_library : shared 00:02:46.380 libdir : lib 00:02:46.380 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:46.380 b_sanitize : address 00:02:46.380 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:46.380 c_link_args : 00:02:46.380 cpu_instruction_set: native 00:02:46.380 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:46.380 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:46.380 enable_docs : false 00:02:46.380 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:46.380 enable_kmods : false 00:02:46.380 max_lcores : 128 00:02:46.380 tests : false 00:02:46.380 00:02:46.380 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.638 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.896 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:46.896 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.896 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.896 [4/268] Linking static target lib/librte_kvargs.a 00:02:46.896 [5/268] Linking static target lib/librte_log.a 00:02:46.896 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.491 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.491 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.491 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:47.491 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.748 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:47.748 [12/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:47.749 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:47.749 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.749 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:47.749 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.007 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.007 [18/268] Linking static target lib/librte_telemetry.a 00:02:48.007 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.007 [20/268] Linking target lib/librte_log.so.24.1 00:02:48.266 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:48.266 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:48.524 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:48.524 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:48.524 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:48.524 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:48.524 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:48.524 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.782 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:48.782 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:48.782 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:48.782 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.040 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:49.040 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.040 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.299 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.299 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.299 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.299 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.299 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.558 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.558 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.558 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.558 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.816 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:50.075 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:50.075 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.075 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:50.333 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.333 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.333 [51/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:50.333 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.333 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.333 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.591 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.849 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:51.108 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:51.108 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:51.108 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:51.108 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:51.108 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:51.367 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:51.367 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:51.367 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:51.367 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:51.624 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:51.883 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:51.883 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:51.883 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:51.883 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:52.140 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.140 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:52.140 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:52.140 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.140 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.398 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:52.398 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:52.398 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:52.657 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:52.657 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:52.657 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:52.657 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:52.657 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:52.916 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:52.916 [85/268] Linking static target lib/librte_eal.a 00:02:53.175 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:53.175 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:53.175 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:53.433 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:53.433 [90/268] Linking static target lib/librte_ring.a 00:02:53.433 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:53.433 [92/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:53.433 [93/268] Linking static target lib/librte_mempool.a 00:02:53.692 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.692 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:53.692 [96/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:53.692 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.950 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.950 [99/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:53.950 [100/268] Linking static target lib/librte_rcu.a 00:02:54.209 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:54.209 [102/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:54.209 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:54.467 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:54.467 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:54.467 [106/268] Linking static target lib/librte_meter.a 00:02:54.467 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:54.467 [108/268] Linking static target lib/librte_mbuf.a 00:02:54.467 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.726 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:54.726 [111/268] Linking static target lib/librte_net.a 00:02:54.726 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.984 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.984 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:54.984 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.984 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.241 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.241 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:55.500 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.758 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:56.015 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.015 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.015 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.581 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.839 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.839 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.839 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:56.839 [128/268] Linking static target lib/librte_pci.a 00:02:56.839 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.839 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:56.839 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.839 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.839 [133/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.839 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.098 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.098 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.098 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.098 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.098 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.098 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.098 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.098 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.098 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.356 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.614 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.614 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:57.614 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:57.872 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.872 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.872 [150/268] Linking static target lib/librte_ethdev.a 00:02:57.872 [151/268] Linking static target lib/librte_cmdline.a 00:02:57.872 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.130 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:58.130 [154/268] Linking static target lib/librte_timer.a 00:02:58.388 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.388 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:58.388 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.646 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.646 [159/268] Linking static target lib/librte_hash.a 00:02:58.646 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.646 [161/268] Linking static target lib/librte_compressdev.a 00:02:58.646 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.916 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.916 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.916 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:59.175 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.432 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.432 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:59.432 [169/268] Linking static target lib/librte_dmadev.a 00:02:59.432 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.690 [171/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.690 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.690 
[173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:59.690 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.690 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:00.255 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.255 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:00.255 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:00.255 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:00.255 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:00.255 [181/268] Linking static target lib/librte_cryptodev.a 00:03:00.255 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.513 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.513 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:01.079 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.079 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.079 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:01.079 [188/268] Linking static target lib/librte_reorder.a 00:03:01.079 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.079 [190/268] Linking static target lib/librte_power.a 00:03:01.338 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.338 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.338 [193/268] Linking static target lib/librte_security.a 00:03:01.596 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:01.596 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.162 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.421 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.421 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.421 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:02.421 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.421 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.680 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.939 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.939 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.939 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:03.197 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:03.197 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:03.197 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:03.197 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.456 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:03.456 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:03.715 [212/268] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.715 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.715 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.715 [215/268] Linking static target drivers/librte_bus_pci.a 00:03:03.715 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.715 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.715 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.715 [219/268] Linking static target drivers/librte_bus_vdev.a 00:03:03.991 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.991 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.249 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.249 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.249 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.249 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:04.249 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.249 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.816 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.075 [229/268] Linking target lib/librte_eal.so.24.1 00:03:05.075 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.075 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:05.075 [232/268] Linking target lib/librte_pci.so.24.1 00:03:05.075 [233/268] Linking target lib/librte_meter.so.24.1 00:03:05.075 [234/268] Linking target lib/librte_ring.so.24.1 00:03:05.075 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:05.075 [236/268] Linking target lib/librte_timer.so.24.1 00:03:05.333 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:05.333 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:05.333 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:05.333 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:05.333 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:05.333 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:05.333 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:05.333 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:05.333 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:05.591 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:05.591 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:05.591 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:05.591 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:05.591 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:05.849 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:05.849 [252/268] Linking target 
lib/librte_compressdev.so.24.1 00:03:05.849 [253/268] Linking target lib/librte_net.so.24.1 00:03:05.849 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:05.849 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:05.849 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:05.849 [257/268] Linking target lib/librte_security.so.24.1 00:03:05.849 [258/268] Linking target lib/librte_hash.so.24.1 00:03:05.849 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:05.849 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.151 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:06.151 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:06.151 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:06.151 [264/268] Linking target lib/librte_power.so.24.1 00:03:09.436 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.436 [266/268] Linking static target lib/librte_vhost.a 00:03:11.337 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.337 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:11.337 INFO: autodetecting backend as ninja 00:03:11.337 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:33.325 CC lib/ut_mock/mock.o 00:03:33.325 CC lib/log/log.o 00:03:33.325 CC lib/log/log_flags.o 00:03:33.325 CC lib/log/log_deprecated.o 00:03:33.325 CC lib/ut/ut.o 00:03:33.325 LIB libspdk_ut_mock.a 00:03:33.325 LIB libspdk_ut.a 00:03:33.325 SO libspdk_ut_mock.so.6.0 00:03:33.325 LIB libspdk_log.a 00:03:33.325 SO libspdk_ut.so.2.0 00:03:33.325 SO libspdk_log.so.7.1 00:03:33.325 SYMLINK libspdk_ut_mock.so 00:03:33.325 SYMLINK libspdk_ut.so 00:03:33.325 SYMLINK libspdk_log.so 00:03:33.325 CXX lib/trace_parser/trace.o 00:03:33.325 CC lib/ioat/ioat.o 00:03:33.325 CC lib/util/base64.o 00:03:33.325 CC lib/util/bit_array.o 00:03:33.325 CC lib/util/crc32.o 00:03:33.325 CC lib/util/cpuset.o 00:03:33.325 CC lib/util/crc16.o 00:03:33.325 CC lib/util/crc32c.o 00:03:33.325 CC lib/dma/dma.o 00:03:33.325 CC lib/vfio_user/host/vfio_user_pci.o 00:03:33.325 CC lib/util/crc32_ieee.o 00:03:33.325 CC lib/vfio_user/host/vfio_user.o 00:03:33.325 CC lib/util/crc64.o 00:03:33.325 CC lib/util/dif.o 00:03:33.325 LIB libspdk_dma.a 00:03:33.325 CC lib/util/fd.o 00:03:33.325 SO libspdk_dma.so.5.0 00:03:33.325 CC lib/util/fd_group.o 00:03:33.325 CC lib/util/file.o 00:03:33.325 SYMLINK libspdk_dma.so 00:03:33.325 CC lib/util/hexlify.o 00:03:33.325 CC lib/util/iov.o 00:03:33.325 CC lib/util/math.o 00:03:33.325 CC lib/util/net.o 00:03:33.325 LIB libspdk_ioat.a 00:03:33.325 LIB libspdk_vfio_user.a 00:03:33.325 SO libspdk_ioat.so.7.0 00:03:33.325 SO libspdk_vfio_user.so.5.0 00:03:33.325 SYMLINK libspdk_ioat.so 00:03:33.325 CC lib/util/pipe.o 00:03:33.325 CC lib/util/strerror_tls.o 00:03:33.325 CC lib/util/string.o 00:03:33.325 SYMLINK libspdk_vfio_user.so 00:03:33.325 CC lib/util/uuid.o 00:03:33.325 CC lib/util/xor.o 00:03:33.325 CC lib/util/zipf.o 00:03:33.325 CC lib/util/md5.o 00:03:33.325 LIB libspdk_util.a 00:03:33.325 LIB libspdk_trace_parser.a 00:03:33.325 SO libspdk_util.so.10.1 00:03:33.325 SO libspdk_trace_parser.so.6.0 00:03:33.326 SYMLINK libspdk_trace_parser.so 00:03:33.326 SYMLINK libspdk_util.so 00:03:33.326 CC lib/vmd/vmd.o 
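(The `CC lib/util/*.o` entries above — base64, crc32c, dif, and friends — are the objects that become libspdk_util, the helper library the rest of the tree links against. As an illustration only, not part of the captured log: a minimal consumer of the CRC32-C helper might look like the sketch below. The all-ones seed and final inversion follow the common CRC32-C convention, e.g. NVMe/TCP data digests — an assumption here, not something this log shows. The build line in the comment is hypothetical.)

    /* crc_demo.c -- illustrative only; exercises the CRC32-C helper from
     * the libspdk_util objects built above.
     * Hypothetical build line: gcc crc_demo.c -lspdk_util -o crc_demo
     */
    #include <inttypes.h>
    #include <stdio.h>
    #include <string.h>

    #include "spdk/crc32.h"    /* declares spdk_crc32c_update() */

    int main(void)
    {
            const char payload[] = "nvme-vg-autotest";
            /* Seed with all ones and invert the result -- the usual
             * CRC32-C convention (assumed here, as in NVMe/TCP digests). */
            uint32_t crc = spdk_crc32c_update(payload, strlen(payload), ~0U);

            printf("crc32c = 0x%08" PRIx32 "\n", crc ^ ~0U);
            return 0;
    }
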
00:03:33.326 CC lib/vmd/led.o 00:03:33.326 CC lib/json/json_parse.o 00:03:33.326 CC lib/json/json_util.o 00:03:33.326 CC lib/json/json_write.o 00:03:33.326 CC lib/env_dpdk/env.o 00:03:33.326 CC lib/conf/conf.o 00:03:33.326 CC lib/env_dpdk/memory.o 00:03:33.326 CC lib/rdma_utils/rdma_utils.o 00:03:33.326 CC lib/idxd/idxd.o 00:03:33.326 CC lib/idxd/idxd_user.o 00:03:33.326 LIB libspdk_conf.a 00:03:33.326 CC lib/idxd/idxd_kernel.o 00:03:33.326 CC lib/env_dpdk/pci.o 00:03:33.326 SO libspdk_conf.so.6.0 00:03:33.326 LIB libspdk_rdma_utils.a 00:03:33.326 LIB libspdk_json.a 00:03:33.326 SYMLINK libspdk_conf.so 00:03:33.326 CC lib/env_dpdk/init.o 00:03:33.326 SO libspdk_rdma_utils.so.1.0 00:03:33.326 SO libspdk_json.so.6.0 00:03:33.326 SYMLINK libspdk_rdma_utils.so 00:03:33.326 CC lib/env_dpdk/threads.o 00:03:33.326 CC lib/env_dpdk/pci_ioat.o 00:03:33.326 SYMLINK libspdk_json.so 00:03:33.326 CC lib/env_dpdk/pci_virtio.o 00:03:33.326 CC lib/env_dpdk/pci_vmd.o 00:03:33.326 CC lib/env_dpdk/pci_idxd.o 00:03:33.326 CC lib/env_dpdk/pci_event.o 00:03:33.326 CC lib/env_dpdk/sigbus_handler.o 00:03:33.326 CC lib/env_dpdk/pci_dpdk.o 00:03:33.326 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:33.326 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:33.326 LIB libspdk_idxd.a 00:03:33.326 LIB libspdk_vmd.a 00:03:33.326 SO libspdk_idxd.so.12.1 00:03:33.326 SO libspdk_vmd.so.6.0 00:03:33.326 SYMLINK libspdk_idxd.so 00:03:33.326 CC lib/jsonrpc/jsonrpc_server.o 00:03:33.326 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:33.326 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:33.326 CC lib/jsonrpc/jsonrpc_client.o 00:03:33.326 CC lib/rdma_provider/common.o 00:03:33.326 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:33.326 SYMLINK libspdk_vmd.so 00:03:33.326 LIB libspdk_rdma_provider.a 00:03:33.326 SO libspdk_rdma_provider.so.7.0 00:03:33.326 LIB libspdk_jsonrpc.a 00:03:33.326 SO libspdk_jsonrpc.so.6.0 00:03:33.326 SYMLINK libspdk_rdma_provider.so 00:03:33.326 SYMLINK libspdk_jsonrpc.so 00:03:33.326 CC lib/rpc/rpc.o 00:03:33.585 LIB libspdk_env_dpdk.a 00:03:33.585 LIB libspdk_rpc.a 00:03:33.585 SO libspdk_env_dpdk.so.15.1 00:03:33.585 SO libspdk_rpc.so.6.0 00:03:33.585 SYMLINK libspdk_rpc.so 00:03:33.843 SYMLINK libspdk_env_dpdk.so 00:03:33.843 CC lib/notify/notify.o 00:03:33.843 CC lib/notify/notify_rpc.o 00:03:33.843 CC lib/trace/trace_flags.o 00:03:33.843 CC lib/trace/trace.o 00:03:33.843 CC lib/trace/trace_rpc.o 00:03:33.843 CC lib/keyring/keyring.o 00:03:33.843 CC lib/keyring/keyring_rpc.o 00:03:34.101 LIB libspdk_notify.a 00:03:34.101 SO libspdk_notify.so.6.0 00:03:34.101 SYMLINK libspdk_notify.so 00:03:34.359 LIB libspdk_trace.a 00:03:34.359 LIB libspdk_keyring.a 00:03:34.359 SO libspdk_trace.so.11.0 00:03:34.359 SO libspdk_keyring.so.2.0 00:03:34.359 SYMLINK libspdk_keyring.so 00:03:34.359 SYMLINK libspdk_trace.so 00:03:34.616 CC lib/sock/sock.o 00:03:34.616 CC lib/sock/sock_rpc.o 00:03:34.616 CC lib/thread/thread.o 00:03:34.616 CC lib/thread/iobuf.o 00:03:35.192 LIB libspdk_sock.a 00:03:35.192 SO libspdk_sock.so.10.0 00:03:35.451 SYMLINK libspdk_sock.so 00:03:35.709 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:35.709 CC lib/nvme/nvme_ctrlr.o 00:03:35.709 CC lib/nvme/nvme_fabric.o 00:03:35.709 CC lib/nvme/nvme_ns_cmd.o 00:03:35.709 CC lib/nvme/nvme_ns.o 00:03:35.709 CC lib/nvme/nvme_pcie_common.o 00:03:35.709 CC lib/nvme/nvme_qpair.o 00:03:35.709 CC lib/nvme/nvme_pcie.o 00:03:35.709 CC lib/nvme/nvme.o 00:03:36.643 CC lib/nvme/nvme_quirks.o 00:03:36.643 CC lib/nvme/nvme_transport.o 00:03:36.643 CC lib/nvme/nvme_discovery.o 00:03:36.643 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:36.643 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:36.643 LIB libspdk_thread.a 00:03:36.643 SO libspdk_thread.so.11.0 00:03:36.901 CC lib/nvme/nvme_tcp.o 00:03:36.901 CC lib/nvme/nvme_opal.o 00:03:36.901 SYMLINK libspdk_thread.so 00:03:36.901 CC lib/accel/accel.o 00:03:37.158 CC lib/accel/accel_rpc.o 00:03:37.158 CC lib/blob/blobstore.o 00:03:37.416 CC lib/blob/request.o 00:03:37.416 CC lib/blob/zeroes.o 00:03:37.416 CC lib/accel/accel_sw.o 00:03:37.416 CC lib/blob/blob_bs_dev.o 00:03:37.675 CC lib/init/json_config.o 00:03:37.675 CC lib/nvme/nvme_io_msg.o 00:03:37.675 CC lib/fsdev/fsdev.o 00:03:37.675 CC lib/virtio/virtio.o 00:03:37.675 CC lib/fsdev/fsdev_io.o 00:03:37.934 CC lib/init/subsystem.o 00:03:37.934 CC lib/init/subsystem_rpc.o 00:03:37.934 CC lib/init/rpc.o 00:03:37.934 CC lib/fsdev/fsdev_rpc.o 00:03:38.191 CC lib/virtio/virtio_vhost_user.o 00:03:38.191 CC lib/nvme/nvme_poll_group.o 00:03:38.191 CC lib/virtio/virtio_vfio_user.o 00:03:38.191 LIB libspdk_init.a 00:03:38.191 SO libspdk_init.so.6.0 00:03:38.449 CC lib/virtio/virtio_pci.o 00:03:38.449 LIB libspdk_accel.a 00:03:38.449 SYMLINK libspdk_init.so 00:03:38.449 CC lib/nvme/nvme_zns.o 00:03:38.449 SO libspdk_accel.so.16.0 00:03:38.449 CC lib/nvme/nvme_stubs.o 00:03:38.449 LIB libspdk_fsdev.a 00:03:38.449 CC lib/nvme/nvme_auth.o 00:03:38.449 SO libspdk_fsdev.so.2.0 00:03:38.449 SYMLINK libspdk_accel.so 00:03:38.707 SYMLINK libspdk_fsdev.so 00:03:38.707 CC lib/nvme/nvme_cuse.o 00:03:38.707 CC lib/event/app.o 00:03:38.707 CC lib/bdev/bdev.o 00:03:38.707 LIB libspdk_virtio.a 00:03:38.965 SO libspdk_virtio.so.7.0 00:03:38.965 CC lib/nvme/nvme_rdma.o 00:03:38.965 SYMLINK libspdk_virtio.so 00:03:38.965 CC lib/bdev/bdev_rpc.o 00:03:38.965 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:38.965 CC lib/bdev/bdev_zone.o 00:03:39.221 CC lib/bdev/part.o 00:03:39.222 CC lib/event/reactor.o 00:03:39.222 CC lib/bdev/scsi_nvme.o 00:03:39.222 CC lib/event/log_rpc.o 00:03:39.479 CC lib/event/app_rpc.o 00:03:39.479 CC lib/event/scheduler_static.o 00:03:39.738 LIB libspdk_fuse_dispatcher.a 00:03:39.738 SO libspdk_fuse_dispatcher.so.1.0 00:03:39.738 LIB libspdk_event.a 00:03:39.738 SYMLINK libspdk_fuse_dispatcher.so 00:03:39.995 SO libspdk_event.so.14.0 00:03:39.995 SYMLINK libspdk_event.so 00:03:40.562 LIB libspdk_nvme.a 00:03:40.820 SO libspdk_nvme.so.15.0 00:03:41.078 SYMLINK libspdk_nvme.so 00:03:41.336 LIB libspdk_blob.a 00:03:41.600 SO libspdk_blob.so.11.0 00:03:41.600 SYMLINK libspdk_blob.so 00:03:41.869 CC lib/lvol/lvol.o 00:03:41.869 CC lib/blobfs/blobfs.o 00:03:41.869 CC lib/blobfs/tree.o 00:03:42.434 LIB libspdk_bdev.a 00:03:42.434 SO libspdk_bdev.so.17.0 00:03:42.434 SYMLINK libspdk_bdev.so 00:03:42.692 CC lib/nbd/nbd_rpc.o 00:03:42.692 CC lib/nbd/nbd.o 00:03:42.692 CC lib/scsi/dev.o 00:03:42.692 CC lib/ftl/ftl_core.o 00:03:42.692 CC lib/ftl/ftl_init.o 00:03:42.692 CC lib/scsi/lun.o 00:03:42.692 CC lib/ublk/ublk.o 00:03:42.692 CC lib/nvmf/ctrlr.o 00:03:42.950 CC lib/ublk/ublk_rpc.o 00:03:42.950 LIB libspdk_blobfs.a 00:03:42.950 SO libspdk_blobfs.so.10.0 00:03:43.207 CC lib/ftl/ftl_layout.o 00:03:43.208 LIB libspdk_lvol.a 00:03:43.208 CC lib/ftl/ftl_debug.o 00:03:43.208 SYMLINK libspdk_blobfs.so 00:03:43.208 CC lib/nvmf/ctrlr_discovery.o 00:03:43.208 SO libspdk_lvol.so.10.0 00:03:43.208 CC lib/scsi/port.o 00:03:43.208 CC lib/scsi/scsi.o 00:03:43.208 SYMLINK libspdk_lvol.so 00:03:43.208 CC lib/scsi/scsi_bdev.o 00:03:43.208 CC lib/ftl/ftl_io.o 00:03:43.465 LIB libspdk_nbd.a 00:03:43.466 CC lib/scsi/scsi_pr.o 
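(The `CC lib/nvme/*.o` entries above build libspdk_nvme, the userspace NVMe driver. A sketch of its probe/attach flow follows — again an aside, not log output. It mirrors the shape of SPDK's classic hello_world example; option-struct fields and minor signature details drift between SPDK releases, so treat this as an outline rather than a definitive implementation.)

    /* probe_demo.c -- hedged sketch of the libspdk_nvme attach flow.
     * Follows the classic hello_world shape; details vary by release.
     */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("probing %s\n", trid->traddr);
            return true;    /* attach to every controller found */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
            const struct spdk_nvme_ctrlr_data *cdata =
                    spdk_nvme_ctrlr_get_data(ctrlr);

            printf("attached: %.40s\n", cdata->mn);
            spdk_nvme_detach(ctrlr);    /* demo only: detach right away */
    }

    int main(void)
    {
            struct spdk_env_opts opts;

            spdk_env_opts_init(&opts);
            opts.name = "probe_demo";
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            /* NULL trid: enumerate local PCIe controllers (classic API). */
            if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
                    return 1;
            }
            return 0;
    }

(spdk_nvme_probe() is synchronous in this shape: it returns only after every discovered controller has been offered to probe_cb/attach_cb, which is why the demo can simply fall off the end of main().)
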
00:03:43.466 CC lib/scsi/scsi_rpc.o 00:03:43.466 SO libspdk_nbd.so.7.0 00:03:43.466 CC lib/nvmf/ctrlr_bdev.o 00:03:43.466 SYMLINK libspdk_nbd.so 00:03:43.466 CC lib/scsi/task.o 00:03:43.466 CC lib/ftl/ftl_sb.o 00:03:43.466 CC lib/ftl/ftl_l2p.o 00:03:43.724 CC lib/ftl/ftl_l2p_flat.o 00:03:43.724 LIB libspdk_ublk.a 00:03:43.724 SO libspdk_ublk.so.3.0 00:03:43.724 CC lib/nvmf/subsystem.o 00:03:43.724 CC lib/nvmf/nvmf.o 00:03:43.724 CC lib/nvmf/nvmf_rpc.o 00:03:43.724 SYMLINK libspdk_ublk.so 00:03:43.724 CC lib/nvmf/transport.o 00:03:43.724 CC lib/nvmf/tcp.o 00:03:43.724 CC lib/ftl/ftl_nv_cache.o 00:03:43.724 CC lib/ftl/ftl_band.o 00:03:43.982 LIB libspdk_scsi.a 00:03:43.982 SO libspdk_scsi.so.9.0 00:03:43.982 SYMLINK libspdk_scsi.so 00:03:43.982 CC lib/nvmf/stubs.o 00:03:44.240 CC lib/nvmf/mdns_server.o 00:03:44.240 CC lib/ftl/ftl_band_ops.o 00:03:44.499 CC lib/ftl/ftl_writer.o 00:03:44.757 CC lib/ftl/ftl_rq.o 00:03:44.757 CC lib/iscsi/conn.o 00:03:44.757 CC lib/nvmf/rdma.o 00:03:44.757 CC lib/iscsi/init_grp.o 00:03:44.757 CC lib/nvmf/auth.o 00:03:45.015 CC lib/ftl/ftl_reloc.o 00:03:45.015 CC lib/vhost/vhost.o 00:03:45.015 CC lib/vhost/vhost_rpc.o 00:03:45.015 CC lib/iscsi/iscsi.o 00:03:45.015 CC lib/iscsi/param.o 00:03:45.273 CC lib/iscsi/portal_grp.o 00:03:45.273 CC lib/ftl/ftl_l2p_cache.o 00:03:45.531 CC lib/iscsi/tgt_node.o 00:03:45.531 CC lib/iscsi/iscsi_subsystem.o 00:03:45.531 CC lib/ftl/ftl_p2l.o 00:03:45.789 CC lib/vhost/vhost_scsi.o 00:03:45.789 CC lib/vhost/vhost_blk.o 00:03:46.047 CC lib/vhost/rte_vhost_user.o 00:03:46.047 CC lib/iscsi/iscsi_rpc.o 00:03:46.047 CC lib/iscsi/task.o 00:03:46.047 CC lib/ftl/ftl_p2l_log.o 00:03:46.047 CC lib/ftl/mngt/ftl_mngt.o 00:03:46.047 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:46.305 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:46.305 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:46.565 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:46.565 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:46.565 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:46.565 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:46.565 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:46.824 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:46.824 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:46.824 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:46.824 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:46.824 CC lib/ftl/utils/ftl_conf.o 00:03:47.082 LIB libspdk_iscsi.a 00:03:47.082 CC lib/ftl/utils/ftl_md.o 00:03:47.082 CC lib/ftl/utils/ftl_mempool.o 00:03:47.082 CC lib/ftl/utils/ftl_bitmap.o 00:03:47.082 SO libspdk_iscsi.so.8.0 00:03:47.082 CC lib/ftl/utils/ftl_property.o 00:03:47.082 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:47.082 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:47.082 LIB libspdk_vhost.a 00:03:47.082 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:47.340 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:47.340 SYMLINK libspdk_iscsi.so 00:03:47.340 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:47.340 SO libspdk_vhost.so.8.0 00:03:47.340 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:47.340 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:47.340 SYMLINK libspdk_vhost.so 00:03:47.340 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:47.340 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:47.340 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:47.340 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:47.340 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:47.645 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:47.645 CC lib/ftl/base/ftl_base_dev.o 00:03:47.645 CC lib/ftl/base/ftl_base_bdev.o 00:03:47.645 CC lib/ftl/ftl_trace.o 00:03:47.645 LIB libspdk_nvmf.a 00:03:47.645 SO libspdk_nvmf.so.20.0 00:03:47.902 LIB libspdk_ftl.a 00:03:47.902 
SYMLINK libspdk_nvmf.so 00:03:48.159 SO libspdk_ftl.so.9.0 00:03:48.417 SYMLINK libspdk_ftl.so 00:03:48.676 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.934 CC module/sock/posix/posix.o 00:03:48.934 CC module/keyring/file/keyring.o 00:03:48.934 CC module/accel/ioat/accel_ioat.o 00:03:48.934 CC module/accel/dsa/accel_dsa.o 00:03:48.934 CC module/accel/iaa/accel_iaa.o 00:03:48.934 CC module/blob/bdev/blob_bdev.o 00:03:48.934 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.934 CC module/accel/error/accel_error.o 00:03:48.934 CC module/fsdev/aio/fsdev_aio.o 00:03:48.934 LIB libspdk_env_dpdk_rpc.a 00:03:48.934 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.934 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.934 CC module/keyring/file/keyring_rpc.o 00:03:48.934 CC module/accel/ioat/accel_ioat_rpc.o 00:03:49.191 LIB libspdk_scheduler_dynamic.a 00:03:49.191 CC module/accel/iaa/accel_iaa_rpc.o 00:03:49.191 CC module/accel/error/accel_error_rpc.o 00:03:49.191 SO libspdk_scheduler_dynamic.so.4.0 00:03:49.191 LIB libspdk_keyring_file.a 00:03:49.191 SYMLINK libspdk_scheduler_dynamic.so 00:03:49.191 CC module/accel/dsa/accel_dsa_rpc.o 00:03:49.191 LIB libspdk_blob_bdev.a 00:03:49.191 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:49.191 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:49.191 SO libspdk_keyring_file.so.2.0 00:03:49.191 LIB libspdk_accel_ioat.a 00:03:49.191 SO libspdk_blob_bdev.so.11.0 00:03:49.191 SO libspdk_accel_ioat.so.6.0 00:03:49.191 LIB libspdk_accel_iaa.a 00:03:49.191 LIB libspdk_accel_error.a 00:03:49.191 SYMLINK libspdk_keyring_file.so 00:03:49.191 SYMLINK libspdk_blob_bdev.so 00:03:49.191 SO libspdk_accel_error.so.2.0 00:03:49.191 SO libspdk_accel_iaa.so.3.0 00:03:49.191 CC module/fsdev/aio/linux_aio_mgr.o 00:03:49.191 SYMLINK libspdk_accel_ioat.so 00:03:49.449 LIB libspdk_accel_dsa.a 00:03:49.449 SYMLINK libspdk_accel_error.so 00:03:49.449 SYMLINK libspdk_accel_iaa.so 00:03:49.449 LIB libspdk_scheduler_dpdk_governor.a 00:03:49.449 SO libspdk_accel_dsa.so.5.0 00:03:49.449 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:49.449 SYMLINK libspdk_accel_dsa.so 00:03:49.449 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:49.449 CC module/keyring/linux/keyring.o 00:03:49.449 CC module/scheduler/gscheduler/gscheduler.o 00:03:49.707 CC module/bdev/error/vbdev_error.o 00:03:49.707 CC module/bdev/delay/vbdev_delay.o 00:03:49.707 CC module/bdev/gpt/gpt.o 00:03:49.707 CC module/bdev/lvol/vbdev_lvol.o 00:03:49.707 CC module/blobfs/bdev/blobfs_bdev.o 00:03:49.707 CC module/keyring/linux/keyring_rpc.o 00:03:49.707 LIB libspdk_scheduler_gscheduler.a 00:03:49.707 CC module/bdev/malloc/bdev_malloc.o 00:03:49.707 SO libspdk_scheduler_gscheduler.so.4.0 00:03:49.707 LIB libspdk_fsdev_aio.a 00:03:49.707 SO libspdk_fsdev_aio.so.1.0 00:03:49.707 SYMLINK libspdk_scheduler_gscheduler.so 00:03:49.707 LIB libspdk_keyring_linux.a 00:03:49.707 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:49.707 LIB libspdk_sock_posix.a 00:03:49.707 SO libspdk_keyring_linux.so.1.0 00:03:49.707 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.707 SO libspdk_sock_posix.so.6.0 00:03:49.965 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.965 SYMLINK libspdk_fsdev_aio.so 00:03:49.965 SYMLINK libspdk_keyring_linux.so 00:03:49.965 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.965 CC module/bdev/error/vbdev_error_rpc.o 00:03:49.965 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:49.965 SYMLINK libspdk_sock_posix.so 00:03:49.965 LIB libspdk_blobfs_bdev.a 00:03:49.965 SO libspdk_blobfs_bdev.so.6.0 00:03:50.223 LIB libspdk_bdev_error.a 00:03:50.223 CC 
module/bdev/null/bdev_null.o 00:03:50.223 SYMLINK libspdk_blobfs_bdev.so 00:03:50.223 CC module/bdev/nvme/bdev_nvme.o 00:03:50.223 SO libspdk_bdev_error.so.6.0 00:03:50.223 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:50.223 LIB libspdk_bdev_gpt.a 00:03:50.223 LIB libspdk_bdev_malloc.a 00:03:50.223 LIB libspdk_bdev_delay.a 00:03:50.223 SO libspdk_bdev_malloc.so.6.0 00:03:50.223 SO libspdk_bdev_gpt.so.6.0 00:03:50.223 SO libspdk_bdev_delay.so.6.0 00:03:50.223 SYMLINK libspdk_bdev_error.so 00:03:50.223 CC module/bdev/passthru/vbdev_passthru.o 00:03:50.223 CC module/bdev/null/bdev_null_rpc.o 00:03:50.223 SYMLINK libspdk_bdev_malloc.so 00:03:50.223 SYMLINK libspdk_bdev_delay.so 00:03:50.223 SYMLINK libspdk_bdev_gpt.so 00:03:50.223 CC module/bdev/nvme/nvme_rpc.o 00:03:50.481 LIB libspdk_bdev_lvol.a 00:03:50.481 CC module/bdev/split/vbdev_split.o 00:03:50.481 SO libspdk_bdev_lvol.so.6.0 00:03:50.481 CC module/bdev/raid/bdev_raid.o 00:03:50.481 LIB libspdk_bdev_null.a 00:03:50.481 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:50.481 SO libspdk_bdev_null.so.6.0 00:03:50.481 SYMLINK libspdk_bdev_lvol.so 00:03:50.481 SYMLINK libspdk_bdev_null.so 00:03:50.481 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:50.481 CC module/bdev/xnvme/bdev_xnvme.o 00:03:50.740 CC module/bdev/aio/bdev_aio.o 00:03:50.740 CC module/bdev/split/vbdev_split_rpc.o 00:03:50.740 CC module/bdev/ftl/bdev_ftl.o 00:03:50.740 LIB libspdk_bdev_passthru.a 00:03:50.740 CC module/bdev/iscsi/bdev_iscsi.o 00:03:50.740 SO libspdk_bdev_passthru.so.6.0 00:03:50.740 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:50.740 SYMLINK libspdk_bdev_passthru.so 00:03:50.740 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:50.998 LIB libspdk_bdev_split.a 00:03:50.998 SO libspdk_bdev_split.so.6.0 00:03:50.998 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:50.998 CC module/bdev/aio/bdev_aio_rpc.o 00:03:50.998 SYMLINK libspdk_bdev_split.so 00:03:50.998 CC module/bdev/nvme/bdev_mdns_client.o 00:03:50.998 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:50.998 LIB libspdk_bdev_zone_block.a 00:03:50.998 CC module/bdev/raid/bdev_raid_rpc.o 00:03:51.257 SO libspdk_bdev_zone_block.so.6.0 00:03:51.257 LIB libspdk_bdev_xnvme.a 00:03:51.257 LIB libspdk_bdev_aio.a 00:03:51.257 SO libspdk_bdev_xnvme.so.3.0 00:03:51.257 SO libspdk_bdev_aio.so.6.0 00:03:51.257 LIB libspdk_bdev_iscsi.a 00:03:51.257 SYMLINK libspdk_bdev_zone_block.so 00:03:51.257 CC module/bdev/nvme/vbdev_opal.o 00:03:51.257 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:51.257 SO libspdk_bdev_iscsi.so.6.0 00:03:51.257 SYMLINK libspdk_bdev_xnvme.so 00:03:51.257 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:51.257 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:51.257 SYMLINK libspdk_bdev_aio.so 00:03:51.257 CC module/bdev/raid/bdev_raid_sb.o 00:03:51.257 LIB libspdk_bdev_ftl.a 00:03:51.257 SYMLINK libspdk_bdev_iscsi.so 00:03:51.257 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:51.257 SO libspdk_bdev_ftl.so.6.0 00:03:51.257 CC module/bdev/raid/raid0.o 00:03:51.516 SYMLINK libspdk_bdev_ftl.so 00:03:51.516 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:51.516 CC module/bdev/raid/raid1.o 00:03:51.516 CC module/bdev/raid/concat.o 00:03:51.775 LIB libspdk_bdev_raid.a 00:03:52.033 SO libspdk_bdev_raid.so.6.0 00:03:52.033 LIB libspdk_bdev_virtio.a 00:03:52.033 SO libspdk_bdev_virtio.so.6.0 00:03:52.033 SYMLINK libspdk_bdev_raid.so 00:03:52.033 SYMLINK libspdk_bdev_virtio.so 00:03:53.408 LIB libspdk_bdev_nvme.a 00:03:53.408 SO libspdk_bdev_nvme.so.7.1 00:03:53.666 SYMLINK libspdk_bdev_nvme.so 00:03:54.232 CC 
module/event/subsystems/iobuf/iobuf.o 00:03:54.232 CC module/event/subsystems/keyring/keyring.o 00:03:54.232 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:54.232 CC module/event/subsystems/sock/sock.o 00:03:54.232 CC module/event/subsystems/vmd/vmd.o 00:03:54.232 CC module/event/subsystems/fsdev/fsdev.o 00:03:54.232 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:54.232 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:54.232 CC module/event/subsystems/scheduler/scheduler.o 00:03:54.232 LIB libspdk_event_fsdev.a 00:03:54.232 LIB libspdk_event_keyring.a 00:03:54.232 LIB libspdk_event_sock.a 00:03:54.232 LIB libspdk_event_vhost_blk.a 00:03:54.232 SO libspdk_event_fsdev.so.1.0 00:03:54.232 LIB libspdk_event_vmd.a 00:03:54.232 SO libspdk_event_keyring.so.1.0 00:03:54.232 LIB libspdk_event_iobuf.a 00:03:54.232 SO libspdk_event_sock.so.5.0 00:03:54.232 SO libspdk_event_vhost_blk.so.3.0 00:03:54.232 LIB libspdk_event_scheduler.a 00:03:54.232 SO libspdk_event_vmd.so.6.0 00:03:54.232 SO libspdk_event_iobuf.so.3.0 00:03:54.232 SYMLINK libspdk_event_fsdev.so 00:03:54.232 SO libspdk_event_scheduler.so.4.0 00:03:54.232 SYMLINK libspdk_event_keyring.so 00:03:54.232 SYMLINK libspdk_event_sock.so 00:03:54.232 SYMLINK libspdk_event_vhost_blk.so 00:03:54.502 SYMLINK libspdk_event_iobuf.so 00:03:54.502 SYMLINK libspdk_event_vmd.so 00:03:54.502 SYMLINK libspdk_event_scheduler.so 00:03:54.759 CC module/event/subsystems/accel/accel.o 00:03:54.759 LIB libspdk_event_accel.a 00:03:54.759 SO libspdk_event_accel.so.6.0 00:03:55.016 SYMLINK libspdk_event_accel.so 00:03:55.274 CC module/event/subsystems/bdev/bdev.o 00:03:55.532 LIB libspdk_event_bdev.a 00:03:55.532 SO libspdk_event_bdev.so.6.0 00:03:55.532 SYMLINK libspdk_event_bdev.so 00:03:55.790 CC module/event/subsystems/nbd/nbd.o 00:03:55.790 CC module/event/subsystems/ublk/ublk.o 00:03:55.790 CC module/event/subsystems/scsi/scsi.o 00:03:55.790 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:55.790 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:56.047 LIB libspdk_event_nbd.a 00:03:56.047 LIB libspdk_event_ublk.a 00:03:56.047 LIB libspdk_event_scsi.a 00:03:56.047 SO libspdk_event_nbd.so.6.0 00:03:56.047 SO libspdk_event_ublk.so.3.0 00:03:56.047 SO libspdk_event_scsi.so.6.0 00:03:56.047 SYMLINK libspdk_event_ublk.so 00:03:56.047 SYMLINK libspdk_event_nbd.so 00:03:56.047 SYMLINK libspdk_event_scsi.so 00:03:56.047 LIB libspdk_event_nvmf.a 00:03:56.047 SO libspdk_event_nvmf.so.6.0 00:03:56.305 SYMLINK libspdk_event_nvmf.so 00:03:56.305 CC module/event/subsystems/iscsi/iscsi.o 00:03:56.305 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:56.563 LIB libspdk_event_vhost_scsi.a 00:03:56.563 LIB libspdk_event_iscsi.a 00:03:56.563 SO libspdk_event_vhost_scsi.so.3.0 00:03:56.563 SO libspdk_event_iscsi.so.6.0 00:03:56.563 SYMLINK libspdk_event_vhost_scsi.so 00:03:56.563 SYMLINK libspdk_event_iscsi.so 00:03:56.820 SO libspdk.so.6.0 00:03:56.820 SYMLINK libspdk.so 00:03:57.078 TEST_HEADER include/spdk/accel.h 00:03:57.078 TEST_HEADER include/spdk/accel_module.h 00:03:57.078 TEST_HEADER include/spdk/assert.h 00:03:57.078 TEST_HEADER include/spdk/barrier.h 00:03:57.078 TEST_HEADER include/spdk/base64.h 00:03:57.078 TEST_HEADER include/spdk/bdev.h 00:03:57.078 CC app/trace_record/trace_record.o 00:03:57.078 TEST_HEADER include/spdk/bdev_module.h 00:03:57.078 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.078 CXX app/trace/trace.o 00:03:57.078 TEST_HEADER include/spdk/bit_array.h 00:03:57.078 TEST_HEADER include/spdk/bit_pool.h 00:03:57.078 TEST_HEADER 
include/spdk/blob_bdev.h 00:03:57.078 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:57.078 TEST_HEADER include/spdk/blobfs.h 00:03:57.078 TEST_HEADER include/spdk/blob.h 00:03:57.078 TEST_HEADER include/spdk/conf.h 00:03:57.078 TEST_HEADER include/spdk/config.h 00:03:57.078 TEST_HEADER include/spdk/cpuset.h 00:03:57.078 TEST_HEADER include/spdk/crc16.h 00:03:57.078 TEST_HEADER include/spdk/crc32.h 00:03:57.078 TEST_HEADER include/spdk/crc64.h 00:03:57.078 TEST_HEADER include/spdk/dif.h 00:03:57.078 TEST_HEADER include/spdk/dma.h 00:03:57.078 TEST_HEADER include/spdk/endian.h 00:03:57.078 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.078 TEST_HEADER include/spdk/env.h 00:03:57.078 TEST_HEADER include/spdk/event.h 00:03:57.078 TEST_HEADER include/spdk/fd_group.h 00:03:57.078 TEST_HEADER include/spdk/fd.h 00:03:57.078 TEST_HEADER include/spdk/file.h 00:03:57.078 CC app/iscsi_tgt/iscsi_tgt.o 00:03:57.078 TEST_HEADER include/spdk/fsdev.h 00:03:57.078 TEST_HEADER include/spdk/fsdev_module.h 00:03:57.078 TEST_HEADER include/spdk/ftl.h 00:03:57.078 CC app/nvmf_tgt/nvmf_main.o 00:03:57.079 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:57.079 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.079 TEST_HEADER include/spdk/hexlify.h 00:03:57.079 TEST_HEADER include/spdk/histogram_data.h 00:03:57.079 TEST_HEADER include/spdk/idxd.h 00:03:57.079 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.079 TEST_HEADER include/spdk/init.h 00:03:57.079 CC examples/util/zipf/zipf.o 00:03:57.079 TEST_HEADER include/spdk/ioat.h 00:03:57.079 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.079 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.079 TEST_HEADER include/spdk/json.h 00:03:57.079 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.079 CC test/thread/poller_perf/poller_perf.o 00:03:57.079 TEST_HEADER include/spdk/keyring.h 00:03:57.079 TEST_HEADER include/spdk/keyring_module.h 00:03:57.079 TEST_HEADER include/spdk/likely.h 00:03:57.079 TEST_HEADER include/spdk/log.h 00:03:57.079 TEST_HEADER include/spdk/lvol.h 00:03:57.079 TEST_HEADER include/spdk/md5.h 00:03:57.079 TEST_HEADER include/spdk/memory.h 00:03:57.079 TEST_HEADER include/spdk/mmio.h 00:03:57.079 TEST_HEADER include/spdk/nbd.h 00:03:57.079 TEST_HEADER include/spdk/net.h 00:03:57.079 TEST_HEADER include/spdk/notify.h 00:03:57.079 TEST_HEADER include/spdk/nvme.h 00:03:57.079 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.079 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.079 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.079 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.079 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.079 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.079 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.079 TEST_HEADER include/spdk/nvmf.h 00:03:57.079 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.079 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.079 CC test/app/bdev_svc/bdev_svc.o 00:03:57.079 TEST_HEADER include/spdk/opal.h 00:03:57.079 CC test/dma/test_dma/test_dma.o 00:03:57.079 TEST_HEADER include/spdk/opal_spec.h 00:03:57.079 TEST_HEADER include/spdk/pci_ids.h 00:03:57.079 TEST_HEADER include/spdk/pipe.h 00:03:57.079 TEST_HEADER include/spdk/queue.h 00:03:57.079 TEST_HEADER include/spdk/reduce.h 00:03:57.079 TEST_HEADER include/spdk/rpc.h 00:03:57.079 TEST_HEADER include/spdk/scheduler.h 00:03:57.079 TEST_HEADER include/spdk/scsi.h 00:03:57.336 TEST_HEADER include/spdk/scsi_spec.h 00:03:57.336 TEST_HEADER include/spdk/sock.h 00:03:57.336 TEST_HEADER include/spdk/stdinc.h 00:03:57.336 TEST_HEADER include/spdk/string.h 00:03:57.336 TEST_HEADER 
include/spdk/thread.h 00:03:57.336 CC test/env/mem_callbacks/mem_callbacks.o 00:03:57.336 TEST_HEADER include/spdk/trace.h 00:03:57.336 TEST_HEADER include/spdk/trace_parser.h 00:03:57.336 TEST_HEADER include/spdk/tree.h 00:03:57.336 TEST_HEADER include/spdk/ublk.h 00:03:57.336 TEST_HEADER include/spdk/util.h 00:03:57.336 TEST_HEADER include/spdk/uuid.h 00:03:57.336 TEST_HEADER include/spdk/version.h 00:03:57.337 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.337 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.337 TEST_HEADER include/spdk/vhost.h 00:03:57.337 TEST_HEADER include/spdk/vmd.h 00:03:57.337 TEST_HEADER include/spdk/xor.h 00:03:57.337 TEST_HEADER include/spdk/zipf.h 00:03:57.337 CXX test/cpp_headers/accel.o 00:03:57.337 LINK zipf 00:03:57.337 LINK poller_perf 00:03:57.337 LINK nvmf_tgt 00:03:57.337 LINK iscsi_tgt 00:03:57.337 LINK spdk_trace_record 00:03:57.337 LINK bdev_svc 00:03:57.594 CXX test/cpp_headers/accel_module.o 00:03:57.594 LINK spdk_trace 00:03:57.594 CC examples/ioat/perf/perf.o 00:03:57.594 CC examples/vmd/lsvmd/lsvmd.o 00:03:57.594 CXX test/cpp_headers/assert.o 00:03:57.594 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.594 CC examples/idxd/perf/perf.o 00:03:57.851 LINK test_dma 00:03:57.851 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:57.851 CC examples/thread/thread/thread_ex.o 00:03:57.851 LINK lsvmd 00:03:57.851 CXX test/cpp_headers/barrier.o 00:03:57.851 LINK mem_callbacks 00:03:57.851 CC app/spdk_tgt/spdk_tgt.o 00:03:57.851 LINK interrupt_tgt 00:03:57.851 LINK ioat_perf 00:03:58.108 CC examples/vmd/led/led.o 00:03:58.108 CXX test/cpp_headers/base64.o 00:03:58.108 CC test/app/histogram_perf/histogram_perf.o 00:03:58.108 LINK thread 00:03:58.108 CC test/env/vtophys/vtophys.o 00:03:58.108 LINK spdk_tgt 00:03:58.108 LINK idxd_perf 00:03:58.108 CC examples/ioat/verify/verify.o 00:03:58.108 CC test/rpc_client/rpc_client_test.o 00:03:58.108 LINK led 00:03:58.108 CXX test/cpp_headers/bdev.o 00:03:58.365 LINK histogram_perf 00:03:58.365 LINK vtophys 00:03:58.365 CXX test/cpp_headers/bdev_module.o 00:03:58.365 CXX test/cpp_headers/bdev_zone.o 00:03:58.365 LINK rpc_client_test 00:03:58.365 LINK nvme_fuzz 00:03:58.365 CC app/spdk_lspci/spdk_lspci.o 00:03:58.365 LINK verify 00:03:58.623 CC examples/sock/hello_world/hello_sock.o 00:03:58.623 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:58.623 LINK spdk_lspci 00:03:58.623 CXX test/cpp_headers/bit_array.o 00:03:58.623 CC app/spdk_nvme_perf/perf.o 00:03:58.623 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:58.623 CC examples/accel/perf/accel_perf.o 00:03:58.623 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:58.623 CC test/app/jsoncat/jsoncat.o 00:03:58.623 LINK env_dpdk_post_init 00:03:58.881 CC examples/blob/hello_world/hello_blob.o 00:03:58.881 CXX test/cpp_headers/bit_pool.o 00:03:58.881 LINK hello_sock 00:03:58.881 CC examples/blob/cli/blobcli.o 00:03:58.881 LINK jsoncat 00:03:58.881 CXX test/cpp_headers/blob_bdev.o 00:03:58.881 CC test/env/memory/memory_ut.o 00:03:58.881 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.881 LINK hello_fsdev 00:03:59.139 CXX test/cpp_headers/blobfs.o 00:03:59.139 LINK hello_blob 00:03:59.139 CXX test/cpp_headers/blob.o 00:03:59.139 CXX test/cpp_headers/conf.o 00:03:59.397 LINK accel_perf 00:03:59.397 CC test/event/event_perf/event_perf.o 00:03:59.397 CC test/accel/dif/dif.o 00:03:59.397 CXX test/cpp_headers/config.o 00:03:59.397 CC test/blobfs/mkfs/mkfs.o 00:03:59.397 CXX test/cpp_headers/cpuset.o 00:03:59.397 LINK blobcli 00:03:59.397 CC test/event/reactor/reactor.o 
00:03:59.397 CXX test/cpp_headers/crc16.o 00:03:59.655 LINK event_perf 00:03:59.655 LINK mkfs 00:03:59.655 LINK reactor 00:03:59.655 CXX test/cpp_headers/crc32.o 00:03:59.655 LINK spdk_nvme_perf 00:03:59.914 CC test/event/reactor_perf/reactor_perf.o 00:03:59.914 CC test/event/app_repeat/app_repeat.o 00:03:59.914 CC examples/nvme/hello_world/hello_world.o 00:03:59.914 CXX test/cpp_headers/crc64.o 00:03:59.914 CC examples/nvme/reconnect/reconnect.o 00:03:59.914 LINK reactor_perf 00:03:59.914 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.914 LINK app_repeat 00:03:59.914 CC app/spdk_nvme_identify/identify.o 00:03:59.914 CXX test/cpp_headers/dif.o 00:04:00.174 CXX test/cpp_headers/dma.o 00:04:00.174 LINK hello_world 00:04:00.174 CXX test/cpp_headers/endian.o 00:04:00.432 CC test/event/scheduler/scheduler.o 00:04:00.432 LINK dif 00:04:00.432 LINK reconnect 00:04:00.432 LINK memory_ut 00:04:00.432 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.432 CXX test/cpp_headers/env_dpdk.o 00:04:00.432 CXX test/cpp_headers/env.o 00:04:00.432 CXX test/cpp_headers/event.o 00:04:00.432 CC test/lvol/esnap/esnap.o 00:04:00.690 LINK scheduler 00:04:00.690 LINK nvme_manage 00:04:00.690 CC test/env/pci/pci_ut.o 00:04:00.690 LINK hello_bdev 00:04:00.690 CXX test/cpp_headers/fd_group.o 00:04:00.690 CXX test/cpp_headers/fd.o 00:04:00.947 CC test/nvme/aer/aer.o 00:04:00.947 LINK iscsi_fuzz 00:04:00.947 CC examples/nvme/arbitration/arbitration.o 00:04:00.947 CC test/bdev/bdevio/bdevio.o 00:04:00.947 CXX test/cpp_headers/file.o 00:04:00.947 CC test/nvme/reset/reset.o 00:04:00.947 LINK spdk_nvme_identify 00:04:00.947 CC examples/bdev/bdevperf/bdevperf.o 00:04:01.206 CXX test/cpp_headers/fsdev.o 00:04:01.206 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.206 LINK pci_ut 00:04:01.206 LINK aer 00:04:01.206 LINK arbitration 00:04:01.206 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.206 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.465 LINK reset 00:04:01.465 CXX test/cpp_headers/fsdev_module.o 00:04:01.465 LINK bdevio 00:04:01.465 CC test/nvme/sgl/sgl.o 00:04:01.465 CXX test/cpp_headers/ftl.o 00:04:01.465 LINK spdk_nvme_discover 00:04:01.465 CC examples/nvme/hotplug/hotplug.o 00:04:01.723 CC test/nvme/e2edp/nvme_dp.o 00:04:01.723 CC test/nvme/overhead/overhead.o 00:04:01.723 CXX test/cpp_headers/fuse_dispatcher.o 00:04:01.723 CC test/nvme/err_injection/err_injection.o 00:04:01.723 LINK sgl 00:04:01.723 CC app/spdk_top/spdk_top.o 00:04:01.723 LINK vhost_fuzz 00:04:01.983 LINK hotplug 00:04:01.983 CXX test/cpp_headers/gpt_spec.o 00:04:01.983 LINK err_injection 00:04:01.983 LINK nvme_dp 00:04:01.983 CXX test/cpp_headers/hexlify.o 00:04:01.983 LINK overhead 00:04:01.983 CC test/app/stub/stub.o 00:04:01.983 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:02.242 LINK bdevperf 00:04:02.242 CXX test/cpp_headers/histogram_data.o 00:04:02.242 CXX test/cpp_headers/idxd.o 00:04:02.242 CC examples/nvme/abort/abort.o 00:04:02.242 CXX test/cpp_headers/idxd_spec.o 00:04:02.242 CC test/nvme/startup/startup.o 00:04:02.242 LINK stub 00:04:02.242 LINK cmb_copy 00:04:02.242 CXX test/cpp_headers/init.o 00:04:02.500 CXX test/cpp_headers/ioat.o 00:04:02.500 LINK startup 00:04:02.500 CC test/nvme/reserve/reserve.o 00:04:02.500 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:02.500 CXX test/cpp_headers/ioat_spec.o 00:04:02.500 CC app/vhost/vhost.o 00:04:02.759 CC test/nvme/simple_copy/simple_copy.o 00:04:02.759 LINK abort 00:04:02.759 LINK pmr_persistence 00:04:02.759 CXX test/cpp_headers/iscsi_spec.o 00:04:02.759 LINK 
reserve 00:04:02.759 CC test/nvme/connect_stress/connect_stress.o 00:04:02.759 CC app/spdk_dd/spdk_dd.o 00:04:02.759 LINK vhost 00:04:02.759 CXX test/cpp_headers/json.o 00:04:03.017 CC test/nvme/boot_partition/boot_partition.o 00:04:03.017 LINK simple_copy 00:04:03.017 LINK connect_stress 00:04:03.017 LINK spdk_top 00:04:03.017 CXX test/cpp_headers/jsonrpc.o 00:04:03.017 CC app/fio/nvme/fio_plugin.o 00:04:03.017 CC examples/nvmf/nvmf/nvmf.o 00:04:03.017 LINK boot_partition 00:04:03.017 CXX test/cpp_headers/keyring.o 00:04:03.276 LINK spdk_dd 00:04:03.276 CC app/fio/bdev/fio_plugin.o 00:04:03.276 CC test/nvme/compliance/nvme_compliance.o 00:04:03.276 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.276 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.276 CXX test/cpp_headers/keyring_module.o 00:04:03.276 CC test/nvme/fdp/fdp.o 00:04:03.534 LINK nvmf 00:04:03.534 CC test/nvme/cuse/cuse.o 00:04:03.534 LINK fused_ordering 00:04:03.534 CXX test/cpp_headers/likely.o 00:04:03.534 LINK doorbell_aers 00:04:03.534 CXX test/cpp_headers/log.o 00:04:03.534 CXX test/cpp_headers/lvol.o 00:04:03.534 LINK nvme_compliance 00:04:03.792 CXX test/cpp_headers/md5.o 00:04:03.792 CXX test/cpp_headers/memory.o 00:04:03.792 LINK spdk_nvme 00:04:03.792 LINK fdp 00:04:03.792 CXX test/cpp_headers/mmio.o 00:04:03.792 CXX test/cpp_headers/nbd.o 00:04:03.792 LINK spdk_bdev 00:04:03.792 CXX test/cpp_headers/net.o 00:04:03.792 CXX test/cpp_headers/notify.o 00:04:03.792 CXX test/cpp_headers/nvme.o 00:04:03.792 CXX test/cpp_headers/nvme_intel.o 00:04:04.051 CXX test/cpp_headers/nvme_ocssd.o 00:04:04.051 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:04.051 CXX test/cpp_headers/nvme_spec.o 00:04:04.051 CXX test/cpp_headers/nvme_zns.o 00:04:04.051 CXX test/cpp_headers/nvmf_cmd.o 00:04:04.051 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:04.051 CXX test/cpp_headers/nvmf.o 00:04:04.051 CXX test/cpp_headers/nvmf_spec.o 00:04:04.051 CXX test/cpp_headers/nvmf_transport.o 00:04:04.051 CXX test/cpp_headers/opal.o 00:04:04.309 CXX test/cpp_headers/opal_spec.o 00:04:04.309 CXX test/cpp_headers/pci_ids.o 00:04:04.309 CXX test/cpp_headers/pipe.o 00:04:04.309 CXX test/cpp_headers/queue.o 00:04:04.309 CXX test/cpp_headers/reduce.o 00:04:04.309 CXX test/cpp_headers/rpc.o 00:04:04.309 CXX test/cpp_headers/scheduler.o 00:04:04.309 CXX test/cpp_headers/scsi.o 00:04:04.309 CXX test/cpp_headers/scsi_spec.o 00:04:04.309 CXX test/cpp_headers/sock.o 00:04:04.309 CXX test/cpp_headers/stdinc.o 00:04:04.309 CXX test/cpp_headers/string.o 00:04:04.309 CXX test/cpp_headers/thread.o 00:04:04.567 CXX test/cpp_headers/trace.o 00:04:04.567 CXX test/cpp_headers/trace_parser.o 00:04:04.567 CXX test/cpp_headers/tree.o 00:04:04.567 CXX test/cpp_headers/ublk.o 00:04:04.567 CXX test/cpp_headers/util.o 00:04:04.567 CXX test/cpp_headers/uuid.o 00:04:04.567 CXX test/cpp_headers/version.o 00:04:04.567 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.567 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.567 CXX test/cpp_headers/vhost.o 00:04:04.567 CXX test/cpp_headers/vmd.o 00:04:04.567 CXX test/cpp_headers/xor.o 00:04:04.825 CXX test/cpp_headers/zipf.o 00:04:05.083 LINK cuse 00:04:07.682 LINK esnap 00:04:07.682 00:04:07.682 real 1m34.789s 00:04:07.682 user 8m59.775s 00:04:07.682 sys 1m42.583s 00:04:07.682 11:15:50 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:07.682 11:15:50 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.682 ************************************ 00:04:07.682 END TEST make 00:04:07.682 ************************************ 
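The wall of CXX test/cpp_headers/*.o lines threaded through the build output above is SPDK's header self-containment check: every public header under include/spdk/ is compiled in its own translation unit, so a header that forgets one of its own includes fails the build here rather than in some unrelated consumer. A minimal stand-alone approximation of that harness (the real one is generated by the build system; paths and compiler flags here are illustrative):

```bash
#!/usr/bin/env bash
# Compile each public header in isolation; any header that is not
# self-contained (missing includes, undeclared types) fails its own TU.
set -e
mkdir -p test/cpp_headers
for h in include/spdk/*.h; do
  base=$(basename "$h" .h)
  printf '#include <spdk/%s.h>\n' "$base" > "test/cpp_headers/$base.cpp"
  g++ -std=c++17 -Iinclude -c "test/cpp_headers/$base.cpp" \
      -o "test/cpp_headers/$base.o"
done
```

The END TEST make banner and the real/user/sys summary around it appear to come from the suite's run_test wrapper, which times each named phase under `time` and prints matching START/END banners.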
00:04:07.682 11:15:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:07.682 11:15:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.682 11:15:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.682 11:15:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.682 11:15:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.682 11:15:50 -- pm/common@44 -- $ pid=5450 00:04:07.682 11:15:50 -- pm/common@50 -- $ kill -TERM 5450 00:04:07.682 11:15:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.682 11:15:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.682 11:15:50 -- pm/common@44 -- $ pid=5451 00:04:07.682 11:15:50 -- pm/common@50 -- $ kill -TERM 5451 00:04:07.682 11:15:50 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:07.682 11:15:50 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:07.940 11:15:50 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.940 11:15:50 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.940 11:15:50 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.940 11:15:50 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.940 11:15:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.940 11:15:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.940 11:15:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.940 11:15:50 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.940 11:15:50 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.940 11:15:50 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.940 11:15:50 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.940 11:15:50 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.940 11:15:50 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.940 11:15:50 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.940 11:15:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.940 11:15:50 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.940 11:15:50 -- scripts/common.sh@345 -- # : 1 00:04:07.940 11:15:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.940 11:15:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.940 11:15:50 -- scripts/common.sh@365 -- # decimal 1 00:04:07.940 11:15:50 -- scripts/common.sh@353 -- # local d=1 00:04:07.940 11:15:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.940 11:15:50 -- scripts/common.sh@355 -- # echo 1 00:04:07.940 11:15:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.940 11:15:50 -- scripts/common.sh@366 -- # decimal 2 00:04:07.940 11:15:50 -- scripts/common.sh@353 -- # local d=2 00:04:07.940 11:15:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.940 11:15:50 -- scripts/common.sh@355 -- # echo 2 00:04:07.940 11:15:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.940 11:15:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.940 11:15:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.940 11:15:50 -- scripts/common.sh@368 -- # return 0 00:04:07.940 11:15:50 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.940 11:15:50 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.940 --rc genhtml_branch_coverage=1 00:04:07.940 --rc genhtml_function_coverage=1 00:04:07.940 --rc genhtml_legend=1 00:04:07.940 --rc geninfo_all_blocks=1 00:04:07.940 --rc geninfo_unexecuted_blocks=1 00:04:07.940 00:04:07.940 ' 00:04:07.940 11:15:50 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.940 --rc genhtml_branch_coverage=1 00:04:07.941 --rc genhtml_function_coverage=1 00:04:07.941 --rc genhtml_legend=1 00:04:07.941 --rc geninfo_all_blocks=1 00:04:07.941 --rc geninfo_unexecuted_blocks=1 00:04:07.941 00:04:07.941 ' 00:04:07.941 11:15:50 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.941 --rc genhtml_branch_coverage=1 00:04:07.941 --rc genhtml_function_coverage=1 00:04:07.941 --rc genhtml_legend=1 00:04:07.941 --rc geninfo_all_blocks=1 00:04:07.941 --rc geninfo_unexecuted_blocks=1 00:04:07.941 00:04:07.941 ' 00:04:07.941 11:15:50 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.941 --rc genhtml_branch_coverage=1 00:04:07.941 --rc genhtml_function_coverage=1 00:04:07.941 --rc genhtml_legend=1 00:04:07.941 --rc geninfo_all_blocks=1 00:04:07.941 --rc geninfo_unexecuted_blocks=1 00:04:07.941 00:04:07.941 ' 00:04:07.941 11:15:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.941 11:15:50 -- nvmf/common.sh@7 -- # uname -s 00:04:07.941 11:15:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.941 11:15:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.941 11:15:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.941 11:15:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.941 11:15:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.941 11:15:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.941 11:15:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.941 11:15:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.941 11:15:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.941 11:15:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.941 11:15:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bac1e23-9dc8-4821-9281-1e3cfea0c0df 00:04:07.941 
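The cmp_versions trace interleaved above is the suite checking whether the installed lcov predates 2.0, since the 1.x series spells its coverage switches as `--rc lcov_branch_coverage=1` and friends. A simplified re-implementation of the comparison being traced (numeric fields only; the real scripts/common.sh additionally validates each field as a decimal):

```bash
# lt A B  -> exit 0 (true) when version A sorts strictly before version B.
# Fields split on '.', '-' and ':'; missing fields default to 0,
# mirroring the traced logic above.
lt() {
  local -a v1 v2
  IFS='.-:' read -ra v1 <<< "$1"
  IFS='.-:' read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1  # equal versions are not "less than"
}

ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15"
if lt "$ver" 2; then
  lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```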
11:15:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=7bac1e23-9dc8-4821-9281-1e3cfea0c0df 00:04:07.941 11:15:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.941 11:15:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.941 11:15:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.941 11:15:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.941 11:15:50 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.941 11:15:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.941 11:15:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.941 11:15:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.941 11:15:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.941 11:15:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.941 11:15:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.941 11:15:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.941 11:15:50 -- paths/export.sh@5 -- # export PATH 00:04:07.941 11:15:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.941 11:15:50 -- nvmf/common.sh@51 -- # : 0 00:04:07.941 11:15:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.941 11:15:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.941 11:15:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.941 11:15:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.941 11:15:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.941 11:15:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.941 11:15:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.941 11:15:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.941 11:15:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.941 11:15:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.941 11:15:50 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.941 11:15:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.941 11:15:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.941 11:15:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.941 11:15:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.941 11:15:50 -- 
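The `[: : integer expression expected` complaint above is a real, if harmless, script bug rather than log corruption: `[ ... -eq ... ]` demands integers on both sides, and at nvmf/common.sh line 33 an unset variable expands to the empty string, so `test` prints the error to stderr and returns false. A small reproduction plus two defensive spellings (the variable name here is hypothetical):

```bash
# Reproduce the message seen in the log:
unset SOME_FLAG                      # hypothetical variable
[ "$SOME_FLAG" -eq 1 ]               # -> "[: : integer expression expected"

# Quiet alternatives:
[ "${SOME_FLAG:-0}" -eq 1 ]          # default the empty value to 0
[[ ${SOME_FLAG:-0} -eq 1 ]]          # [[ ]] also avoids word-splitting
```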
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.941 11:15:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.941 11:15:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.941 11:15:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.941 11:15:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54968 00:04:07.941 11:15:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.941 11:15:50 -- pm/common@17 -- # local monitor 00:04:07.941 11:15:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.941 11:15:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.199 11:15:50 -- pm/common@21 -- # date +%s 00:04:08.199 11:15:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731669350 00:04:08.199 11:15:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.199 11:15:50 -- pm/common@25 -- # sleep 1 00:04:08.199 11:15:50 -- pm/common@21 -- # date +%s 00:04:08.199 11:15:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731669350 00:04:08.199 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731669350_collect-cpu-load.pm.log 00:04:08.199 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731669350_collect-vmstat.pm.log 00:04:09.132 11:15:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.132 11:15:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.132 11:15:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.132 11:15:51 -- common/autotest_common.sh@10 -- # set +x 00:04:09.132 11:15:51 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.132 11:15:51 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:09.132 11:15:51 -- common/autotest_common.sh@10 -- # set +x 00:04:09.132 11:15:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:09.132 11:15:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:09.132 11:15:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:09.132 11:15:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:09.132 11:15:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:09.132 11:15:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.132 11:15:51 -- common/autotest_common.sh@1455 -- # uname 00:04:09.132 11:15:51 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:09.132 11:15:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:09.132 11:15:51 -- common/autotest_common.sh@1475 -- # uname 00:04:09.132 11:15:51 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:09.132 11:15:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:09.132 11:15:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.132 lcov: LCOV version 1.15 00:04:09.132 11:15:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:27.210 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:27.210 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:42.116 11:16:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:42.116 11:16:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.116 11:16:23 -- common/autotest_common.sh@10 -- # set +x 00:04:42.116 11:16:23 -- spdk/autotest.sh@78 -- # rm -f 00:04:42.116 11:16:23 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:42.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.116 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:42.116 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:42.116 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:42.116 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:42.116 11:16:24 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:42.116 11:16:24 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:42.116 11:16:24 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:42.116 11:16:24 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:42.116 11:16:24 
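The long lcov command spanning the lines above performs an initial capture (`-i`) tagged `Baseline`: it records zero hits for every instrumented source file before any test executes, so files the tests never touch still show up in the final report at 0% instead of vanishing. The nvme_stubs.gcno warning only means that one object file contains no instrumentable functions. A hedged sketch of the workflow this baseline belongs to (the post-test capture and merge follow the conventional lcov pattern and sit outside this excerpt; every file name after cov_base.info is illustrative):

```bash
src=/home/vagrant/spdk_repo/spdk
out=$src/../output

# 1. Zero-coverage baseline, as in the log:
lcov -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"

# 2. Run the test suite (placeholder for autotest.sh's test phases):
run_all_tests

# 3. Capture the counters the tests actually produced:
lcov -q -c --no-external -t Tests -d "$src" -o "$out/cov_test.info"

# 4. Merge baseline and test data, then render a report:
lcov -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
genhtml "$out/cov_total.info" -o "$out/coverage_html"
```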
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:42.116 11:16:24 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:42.116 11:16:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:42.116 11:16:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:42.116 11:16:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.116 11:16:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:42.116 11:16:24 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:42.116 11:16:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:42.116 No valid GPT data, bailing 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # pt= 00:04:42.116 11:16:24 -- scripts/common.sh@395 -- # return 1 00:04:42.116 11:16:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:42.116 1+0 records in 00:04:42.116 1+0 records out 00:04:42.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135848 s, 77.2 MB/s 00:04:42.116 11:16:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.116 11:16:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:42.116 11:16:24 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:42.116 11:16:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:42.116 No valid GPT data, bailing 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # pt= 00:04:42.116 11:16:24 -- scripts/common.sh@395 -- # return 1 00:04:42.116 11:16:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:42.116 1+0 records in 00:04:42.116 1+0 records out 00:04:42.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420627 s, 249 MB/s 00:04:42.116 11:16:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.116 11:16:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:42.116 11:16:24 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:42.116 11:16:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:42.116 No valid GPT data, bailing 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # pt= 00:04:42.116 11:16:24 -- scripts/common.sh@395 -- # return 1 00:04:42.116 11:16:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:42.116 1+0 
records in 00:04:42.116 1+0 records out 00:04:42.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465536 s, 225 MB/s 00:04:42.116 11:16:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.116 11:16:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:42.116 11:16:24 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:42.116 11:16:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:42.116 No valid GPT data, bailing 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # pt= 00:04:42.116 11:16:24 -- scripts/common.sh@395 -- # return 1 00:04:42.116 11:16:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:42.116 1+0 records in 00:04:42.116 1+0 records out 00:04:42.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00339702 s, 309 MB/s 00:04:42.116 11:16:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.116 11:16:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:42.116 11:16:24 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:42.116 11:16:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:42.116 No valid GPT data, bailing 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:42.116 11:16:24 -- scripts/common.sh@394 -- # pt= 00:04:42.116 11:16:24 -- scripts/common.sh@395 -- # return 1 00:04:42.116 11:16:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:42.116 1+0 records in 00:04:42.116 1+0 records out 00:04:42.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448383 s, 234 MB/s 00:04:42.116 11:16:24 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:42.116 11:16:24 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:42.116 11:16:24 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:42.116 11:16:24 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:42.117 11:16:24 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:42.117 No valid GPT data, bailing 00:04:42.117 11:16:24 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:42.117 11:16:24 -- scripts/common.sh@394 -- # pt= 00:04:42.117 11:16:24 -- scripts/common.sh@395 -- # return 1 00:04:42.117 11:16:24 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:42.117 1+0 records in 00:04:42.117 1+0 records out 00:04:42.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481021 s, 218 MB/s 00:04:42.117 11:16:24 -- spdk/autotest.sh@105 -- # sync 00:04:42.117 11:16:24 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:42.117 11:16:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:42.117 11:16:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:44.028 11:16:26 -- spdk/autotest.sh@111 -- # uname -s 00:04:44.028 11:16:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:44.028 11:16:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:44.028 11:16:26 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:44.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.852 
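The repeated spdk-gpt.py / blkid / dd triplets above amount to a wipe loop: every whole NVMe namespace (the `!(*p*)` extglob skips partitions) that carries no recognizable partition table gets its first MiB zeroed, clearing stale GPT or filesystem metadata before the tests claim the drives. A condensed sketch of that loop (plain blkid stands in for the scripts/spdk-gpt.py check the log actually runs first):

```bash
shopt -s extglob
for dev in /dev/nvme*n!(*p*); do          # whole namespaces only, no partitions
  pt=$(blkid -s PTTYPE -o value "$dev")   # empty when no partition table found
  if [[ -z $pt ]]; then
    # "No valid GPT data, bailing" -> safe to scrub the first MiB
    dd if=/dev/zero of="$dev" bs=1M count=1
  fi
done
```

The widely varying dd throughput figures (77 MB/s to 309 MB/s) say little about the devices at a 1 MiB sample size; they mostly reflect caching.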
Hugepages 00:04:44.852 node hugesize free / total 00:04:44.852 node0 1048576kB 0 / 0 00:04:44.852 node0 2048kB 0 / 0 00:04:44.852 00:04:44.852 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.139 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:45.140 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:45.140 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:45.402 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:45.402 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:45.402 11:16:28 -- spdk/autotest.sh@117 -- # uname -s 00:04:45.402 11:16:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:45.402 11:16:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:45.402 11:16:28 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.534 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.534 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.534 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.534 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.534 11:16:29 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:47.909 11:16:30 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:47.909 11:16:30 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:47.909 11:16:30 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:47.909 11:16:30 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:47.909 11:16:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:47.909 11:16:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:47.909 11:16:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.909 11:16:30 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:47.909 11:16:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:47.909 11:16:30 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:47.909 11:16:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:47.909 11:16:30 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.168 Waiting for block devices as requested 00:04:48.168 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.426 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.426 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:48.426 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.699 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:53.699 11:16:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:53.699 11:16:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:53.699 11:16:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.699 11:16:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:53.699 11:16:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.699 11:16:36 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:53.699 11:16:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.699 11:16:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:53.699 11:16:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:53.699 11:16:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:53.699 11:16:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:53.699 11:16:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:53.699 11:16:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:53.699 11:16:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:53.699 11:16:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:53.699 11:16:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:53.699 11:16:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:53.699 11:16:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1541 -- # continue 00:04:53.700 11:16:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:53.700 11:16:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:53.700 11:16:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1541 -- # continue 00:04:53.700 11:16:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:53.700 11:16:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:53.700 11:16:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1541 -- # continue 00:04:53.700 11:16:36 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:53.700 11:16:36 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:53.700 11:16:36 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:53.700 11:16:36 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:53.700 11:16:36 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:53.700 11:16:36 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:53.700 11:16:36 -- common/autotest_common.sh@1541 -- # continue 00:04:53.700 11:16:36 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:53.700 11:16:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.700 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.700 11:16:36 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:53.700 11:16:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.700 11:16:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.959 11:16:36 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.154 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.154 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.154 11:16:37 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:55.154 11:16:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.154 11:16:37 -- common/autotest_common.sh@10 -- # set +x 00:04:55.154 11:16:37 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:55.154 11:16:37 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:55.154 11:16:37 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:55.154 11:16:37 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:55.154 11:16:37 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:55.154 11:16:37 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:55.154 11:16:37 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:55.154 11:16:37 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:55.154 11:16:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:55.154 11:16:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:55.154 11:16:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.154 11:16:37 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.154 11:16:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:55.154 11:16:38 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:55.154 11:16:38 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:55.154 11:16:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:55.154 11:16:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:55.154 11:16:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:55.154 11:16:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:55.154 11:16:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:55.154 11:16:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:55.154 11:16:38 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:55.154 11:16:38 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:55.154 11:16:38 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:55.154 11:16:38 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:55.154 11:16:38 -- common/autotest_common.sh@1570 -- # return 0 00:04:55.154 11:16:38 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:55.154 11:16:38 -- common/autotest_common.sh@1578 -- # return 0 00:04:55.154 11:16:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:55.154 11:16:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:55.154 11:16:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:55.154 11:16:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:55.154 11:16:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:55.154 11:16:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.154 11:16:38 -- common/autotest_common.sh@10 -- # set +x 00:04:55.154 11:16:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:55.154 11:16:38 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:55.154 11:16:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.154 11:16:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.154 11:16:38 -- common/autotest_common.sh@10 -- # set +x 00:04:55.154 ************************************ 00:04:55.154 START TEST env 00:04:55.154 ************************************ 00:04:55.154 11:16:38 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:55.412 * Looking for test storage... 00:04:55.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:55.412 11:16:38 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.412 11:16:38 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.412 11:16:38 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.412 11:16:38 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.412 11:16:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.412 11:16:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.412 11:16:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.412 11:16:38 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.412 11:16:38 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.412 11:16:38 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.412 11:16:38 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.412 11:16:38 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.412 11:16:38 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.412 11:16:38 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.412 11:16:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.412 11:16:38 env -- scripts/common.sh@344 -- # case "$op" in 00:04:55.412 11:16:38 env -- scripts/common.sh@345 -- # : 1 00:04:55.412 11:16:38 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.412 11:16:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.412 11:16:38 env -- scripts/common.sh@365 -- # decimal 1 00:04:55.412 11:16:38 env -- scripts/common.sh@353 -- # local d=1 00:04:55.412 11:16:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.412 11:16:38 env -- scripts/common.sh@355 -- # echo 1 00:04:55.412 11:16:38 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.412 11:16:38 env -- scripts/common.sh@366 -- # decimal 2 00:04:55.412 11:16:38 env -- scripts/common.sh@353 -- # local d=2 00:04:55.412 11:16:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.412 11:16:38 env -- scripts/common.sh@355 -- # echo 2 00:04:55.412 11:16:38 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.412 11:16:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.412 11:16:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.412 11:16:38 env -- scripts/common.sh@368 -- # return 0 00:04:55.412 11:16:38 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.412 11:16:38 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.412 --rc genhtml_branch_coverage=1 00:04:55.412 --rc genhtml_function_coverage=1 00:04:55.412 --rc genhtml_legend=1 00:04:55.413 --rc geninfo_all_blocks=1 00:04:55.413 --rc geninfo_unexecuted_blocks=1 00:04:55.413 00:04:55.413 ' 00:04:55.413 11:16:38 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.413 --rc genhtml_branch_coverage=1 00:04:55.413 --rc genhtml_function_coverage=1 00:04:55.413 --rc genhtml_legend=1 00:04:55.413 --rc geninfo_all_blocks=1 00:04:55.413 --rc geninfo_unexecuted_blocks=1 00:04:55.413 00:04:55.413 ' 00:04:55.413 11:16:38 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.413 --rc genhtml_branch_coverage=1 00:04:55.413 --rc genhtml_function_coverage=1 00:04:55.413 --rc genhtml_legend=1 00:04:55.413 --rc geninfo_all_blocks=1 00:04:55.413 --rc geninfo_unexecuted_blocks=1 00:04:55.413 00:04:55.413 ' 00:04:55.413 11:16:38 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.413 --rc genhtml_branch_coverage=1 00:04:55.413 --rc genhtml_function_coverage=1 00:04:55.413 --rc genhtml_legend=1 00:04:55.413 --rc geninfo_all_blocks=1 00:04:55.413 --rc geninfo_unexecuted_blocks=1 00:04:55.413 00:04:55.413 ' 00:04:55.413 11:16:38 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:55.413 11:16:38 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.413 11:16:38 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.413 11:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.413 ************************************ 00:04:55.413 START TEST env_memory 00:04:55.413 ************************************ 00:04:55.413 11:16:38 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:55.413 00:04:55.413 00:04:55.413 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.413 http://cunit.sourceforge.net/ 00:04:55.413 00:04:55.413 00:04:55.413 Suite: memory 00:04:55.413 Test: alloc and free memory map ...[2024-11-15 11:16:38.330774] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:55.671 passed 00:04:55.671 Test: mem map translation ...[2024-11-15 11:16:38.391413] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:55.671 [2024-11-15 11:16:38.391514] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:55.671 [2024-11-15 11:16:38.391615] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:55.671 [2024-11-15 11:16:38.391667] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:55.671 passed 00:04:55.671 Test: mem map registration ...[2024-11-15 11:16:38.490440] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:55.671 [2024-11-15 11:16:38.490546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:55.671 passed 00:04:55.930 Test: mem map adjacent registrations ...passed 00:04:55.930 00:04:55.930 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.930 suites 1 1 n/a 0 0 00:04:55.930 tests 4 4 4 0 0 00:04:55.930 asserts 152 152 152 0 n/a 00:04:55.930 00:04:55.930 Elapsed time = 0.344 seconds 00:04:55.930 00:04:55.930 real 0m0.387s 00:04:55.930 user 0m0.355s 00:04:55.930 sys 0m0.023s 00:04:55.930 11:16:38 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.930 ************************************ 00:04:55.930 END TEST env_memory 00:04:55.930 ************************************ 00:04:55.930 11:16:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:55.930 11:16:38 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:55.930 11:16:38 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.930 11:16:38 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.930 11:16:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.930 ************************************ 00:04:55.930 START TEST env_vtophys 00:04:55.930 ************************************ 00:04:55.930 11:16:38 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:55.930 EAL: lib.eal log level changed from notice to debug 00:04:55.930 EAL: Detected lcore 0 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 1 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 2 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 3 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 4 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 5 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 6 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 7 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 8 as core 0 on socket 0 00:04:55.930 EAL: Detected lcore 9 as core 0 on socket 0 00:04:55.930 EAL: Maximum logical cores by configuration: 128 00:04:55.930 EAL: Detected CPU lcores: 10 00:04:55.930 EAL: Detected NUMA nodes: 1 00:04:55.930 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:55.930 EAL: Detected shared linkage of DPDK 00:04:55.930 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:55.930 EAL: Selected IOVA mode 'PA' 00:04:55.930 EAL: Probing VFIO support... 00:04:55.930 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:55.930 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:55.930 EAL: Ask a virtual area of 0x2e000 bytes 00:04:55.930 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:55.930 EAL: Setting up physically contiguous memory... 00:04:55.930 EAL: Setting maximum number of open files to 524288 00:04:55.930 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:55.930 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:55.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.930 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:55.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.930 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:55.930 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:55.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.930 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:55.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.930 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:55.930 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:55.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.930 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:55.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.930 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:55.930 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:55.930 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.930 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:55.930 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.930 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.930 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:55.930 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:55.930 EAL: Hugepages will be freed exactly as allocated. 00:04:55.930 EAL: No shared files mode enabled, IPC is disabled 00:04:55.930 EAL: No shared files mode enabled, IPC is disabled 00:04:56.189 EAL: TSC frequency is ~2200000 KHz 00:04:56.189 EAL: Main lcore 0 is ready (tid=7f6f6801ba40;cpuset=[0]) 00:04:56.189 EAL: Trying to obtain current memory policy. 00:04:56.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.189 EAL: Restoring previous memory policy: 0 00:04:56.189 EAL: request: mp_malloc_sync 00:04:56.189 EAL: No shared files mode enabled, IPC is disabled 00:04:56.189 EAL: Heap on socket 0 was expanded by 2MB 00:04:56.189 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:56.189 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:56.189 EAL: Mem event callback 'spdk:(nil)' registered 00:04:56.189 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:56.189 00:04:56.189 00:04:56.189 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.189 http://cunit.sourceforge.net/ 00:04:56.189 00:04:56.189 00:04:56.189 Suite: components_suite 00:04:56.755 Test: vtophys_malloc_test ...passed 00:04:56.755 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:56.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.755 EAL: Restoring previous memory policy: 4 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was expanded by 4MB 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was shrunk by 4MB 00:04:56.755 EAL: Trying to obtain current memory policy. 00:04:56.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.755 EAL: Restoring previous memory policy: 4 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was expanded by 6MB 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was shrunk by 6MB 00:04:56.755 EAL: Trying to obtain current memory policy. 00:04:56.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.755 EAL: Restoring previous memory policy: 4 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was expanded by 10MB 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was shrunk by 10MB 00:04:56.755 EAL: Trying to obtain current memory policy. 00:04:56.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.755 EAL: Restoring previous memory policy: 4 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was expanded by 18MB 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was shrunk by 18MB 00:04:56.755 EAL: Trying to obtain current memory policy. 00:04:56.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.755 EAL: Restoring previous memory policy: 4 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was expanded by 34MB 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was shrunk by 34MB 00:04:56.755 EAL: Trying to obtain current memory policy. 
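The *ERROR* records in the env_memory suite further up are deliberate negative tests of SPDK's mem map API, not real failures. A minimal consumer-side sketch of that API, assuming a program linked against SPDK's env library (the translation value 0xabcd is arbitrary):

    #include "spdk/env.h"

    /* Invoked as regions are registered/unregistered. Returning non-zero
     * during the initial walk is what produced the "Initial mem_map
     * notify failed" record above. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            return 0;
    }

    static const struct spdk_mem_map_ops ops = {
            .notify_cb = notify_cb,
            .are_contiguous = NULL,
    };

    void
    mem_map_sketch(void)
    {
            /* 0 is the translation handed back for unmapped regions. */
            struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

            if (map == NULL) {
                    return;
            }
            /* vaddr and len must be 2 MiB-aligned; the "invalid ...
             * vaddr=2097152 len=1234" records above are the negative
             * tests for exactly this rule. */
            spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0xabcd);
            spdk_mem_map_free(&map);
    }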
00:04:56.755 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.755 EAL: Restoring previous memory policy: 4 00:04:56.755 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.755 EAL: request: mp_malloc_sync 00:04:56.755 EAL: No shared files mode enabled, IPC is disabled 00:04:56.755 EAL: Heap on socket 0 was expanded by 66MB 00:04:57.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.013 EAL: request: mp_malloc_sync 00:04:57.013 EAL: No shared files mode enabled, IPC is disabled 00:04:57.013 EAL: Heap on socket 0 was shrunk by 66MB 00:04:57.013 EAL: Trying to obtain current memory policy. 00:04:57.013 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.013 EAL: Restoring previous memory policy: 4 00:04:57.013 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.013 EAL: request: mp_malloc_sync 00:04:57.013 EAL: No shared files mode enabled, IPC is disabled 00:04:57.013 EAL: Heap on socket 0 was expanded by 130MB 00:04:57.271 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.271 EAL: request: mp_malloc_sync 00:04:57.271 EAL: No shared files mode enabled, IPC is disabled 00:04:57.271 EAL: Heap on socket 0 was shrunk by 130MB 00:04:57.529 EAL: Trying to obtain current memory policy. 00:04:57.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:57.529 EAL: Restoring previous memory policy: 4 00:04:57.529 EAL: Calling mem event callback 'spdk:(nil)' 00:04:57.529 EAL: request: mp_malloc_sync 00:04:57.529 EAL: No shared files mode enabled, IPC is disabled 00:04:57.529 EAL: Heap on socket 0 was expanded by 258MB 00:04:57.787 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.047 EAL: request: mp_malloc_sync 00:04:58.047 EAL: No shared files mode enabled, IPC is disabled 00:04:58.047 EAL: Heap on socket 0 was shrunk by 258MB 00:04:58.305 EAL: Trying to obtain current memory policy. 00:04:58.305 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.563 EAL: Restoring previous memory policy: 4 00:04:58.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.563 EAL: request: mp_malloc_sync 00:04:58.563 EAL: No shared files mode enabled, IPC is disabled 00:04:58.563 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.478 EAL: request: mp_malloc_sync 00:04:59.478 EAL: No shared files mode enabled, IPC is disabled 00:04:59.478 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.413 EAL: Trying to obtain current memory policy. 
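The expand/shrink pairs in this suite come from vtophys_spdk_malloc_test round-tripping ever larger DMA buffers through the env allocator. A rough sketch of the calls being exercised (the app name and buffer size here are illustrative, not taken from the test source):

    #include <stdio.h>
    #include "spdk/env.h"

    int
    main(int argc, char **argv)
    {
            struct spdk_env_opts opts;
            void *buf;
            uint64_t paddr;

            spdk_env_opts_init(&opts);
            opts.name = "vtophys_sketch";   /* illustrative app name */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }

            /* DMA-safe allocations like this one drive the "Heap on
             * socket 0 was expanded by ..." records in this suite. */
            buf = spdk_dma_malloc(4 * 1024 * 1024, 0x200000, NULL);
            if (buf == NULL) {
                    return 1;
            }

            /* Resolve the buffer's physical (IOVA) address. */
            paddr = spdk_vtophys(buf, NULL);
            if (paddr == SPDK_VTOPHYS_ERROR) {
                    fprintf(stderr, "no translation for %p\n", buf);
            }

            /* Freeing allows the heap to shrink again, as logged. */
            spdk_dma_free(buf);
            return 0;
    }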
00:05:00.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.413 EAL: Restoring previous memory policy: 4 00:05:00.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.413 EAL: request: mp_malloc_sync 00:05:00.413 EAL: No shared files mode enabled, IPC is disabled 00:05:00.413 EAL: Heap on socket 0 was expanded by 1026MB 00:05:02.312 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.312 EAL: request: mp_malloc_sync 00:05:02.312 EAL: No shared files mode enabled, IPC is disabled 00:05:02.312 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:03.688 passed 00:05:03.688 00:05:03.688 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.688 suites 1 1 n/a 0 0 00:05:03.688 tests 2 2 2 0 0 00:05:03.688 asserts 5684 5684 5684 0 n/a 00:05:03.688 00:05:03.688 Elapsed time = 7.610 seconds 00:05:03.688 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.688 EAL: request: mp_malloc_sync 00:05:03.688 EAL: No shared files mode enabled, IPC is disabled 00:05:03.688 EAL: Heap on socket 0 was shrunk by 2MB 00:05:03.688 EAL: No shared files mode enabled, IPC is disabled 00:05:03.688 EAL: No shared files mode enabled, IPC is disabled 00:05:03.688 EAL: No shared files mode enabled, IPC is disabled 00:05:03.946 00:05:03.946 real 0m7.966s 00:05:03.946 user 0m6.758s 00:05:03.946 sys 0m1.043s 00:05:03.946 11:16:46 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.946 11:16:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:03.946 ************************************ 00:05:03.946 END TEST env_vtophys 00:05:03.946 ************************************ 00:05:03.946 11:16:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:03.946 11:16:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.946 11:16:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.946 11:16:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.946 ************************************ 00:05:03.946 START TEST env_pci 00:05:03.946 ************************************ 00:05:03.946 11:16:46 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:03.946 00:05:03.946 00:05:03.946 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.946 http://cunit.sourceforge.net/ 00:05:03.946 00:05:03.946 00:05:03.946 Suite: pci 00:05:03.946 Test: pci_hook ...[2024-11-15 11:16:46.755263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57805 has claimed it 00:05:03.946 passed 00:05:03.946 00:05:03.946 EAL: Cannot find device (10000:00:01.0) 00:05:03.946 EAL: Failed to attach device on primary process 00:05:03.946 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.946 suites 1 1 n/a 0 0 00:05:03.946 tests 1 1 1 0 0 00:05:03.946 asserts 25 25 25 0 n/a 00:05:03.946 00:05:03.946 Elapsed time = 0.008 seconds 00:05:03.946 00:05:03.946 real 0m0.082s 00:05:03.946 user 0m0.044s 00:05:03.946 sys 0m0.037s 00:05:03.946 11:16:46 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.946 11:16:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:03.947 ************************************ 00:05:03.947 END TEST env_pci 00:05:03.947 ************************************ 00:05:03.947 11:16:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:03.947 11:16:46 env -- env/env.sh@15 -- # uname 00:05:03.947 11:16:46 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:03.947 11:16:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:03.947 11:16:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:03.947 11:16:46 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:03.947 11:16:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.947 11:16:46 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.947 ************************************ 00:05:03.947 START TEST env_dpdk_post_init 00:05:03.947 ************************************ 00:05:03.947 11:16:46 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.205 EAL: Detected CPU lcores: 10 00:05:04.205 EAL: Detected NUMA nodes: 1 00:05:04.205 EAL: Detected shared linkage of DPDK 00:05:04.205 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.205 EAL: Selected IOVA mode 'PA' 00:05:04.205 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.205 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:04.205 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:04.205 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:04.205 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:04.462 Starting DPDK initialization... 00:05:04.462 Starting SPDK post initialization... 00:05:04.462 SPDK NVMe probe 00:05:04.462 Attaching to 0000:00:10.0 00:05:04.462 Attaching to 0000:00:11.0 00:05:04.462 Attaching to 0000:00:12.0 00:05:04.462 Attaching to 0000:00:13.0 00:05:04.462 Attached to 0000:00:10.0 00:05:04.462 Attached to 0000:00:11.0 00:05:04.462 Attached to 0000:00:13.0 00:05:04.462 Attached to 0000:00:12.0 00:05:04.462 Cleaning up... 
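env_dpdk_post_init covers the case where the application brings up DPDK's EAL itself and SPDK attaches afterwards, rather than SPDK owning initialization. Roughly, under that assumption (argument handling and cleanup omitted):

    #include <rte_eal.h>
    #include "spdk/env_dpdk.h"

    int
    main(int argc, char **argv)
    {
            /* The application, not SPDK, brings up DPDK's EAL... */
            if (rte_eal_init(argc, argv) < 0) {
                    return 1;
            }

            /* ...and SPDK then attaches to the already-running EAL.
             * 'false' = the EAL is not in legacy memory mode. */
            if (spdk_env_dpdk_post_init(false) != 0) {
                    return 1;
            }

            /* From here, spdk_* env calls (allocation, the PCI probing
             * seen above, ...) behave as if spdk_env_init() had run. */
            return 0;
    }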
00:05:04.462 00:05:04.462 real 0m0.317s 00:05:04.462 user 0m0.112s 00:05:04.462 sys 0m0.108s 00:05:04.462 11:16:47 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.462 ************************************ 00:05:04.462 END TEST env_dpdk_post_init 00:05:04.462 ************************************ 00:05:04.462 11:16:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.462 11:16:47 env -- env/env.sh@26 -- # uname 00:05:04.462 11:16:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:04.463 11:16:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.463 11:16:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.463 11:16:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.463 11:16:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.463 ************************************ 00:05:04.463 START TEST env_mem_callbacks 00:05:04.463 ************************************ 00:05:04.463 11:16:47 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:04.463 EAL: Detected CPU lcores: 10 00:05:04.463 EAL: Detected NUMA nodes: 1 00:05:04.463 EAL: Detected shared linkage of DPDK 00:05:04.463 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.463 EAL: Selected IOVA mode 'PA' 00:05:04.720 00:05:04.720 00:05:04.721 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.721 http://cunit.sourceforge.net/ 00:05:04.721 00:05:04.721 00:05:04.721 Suite: memory 00:05:04.721 Test: test ... 00:05:04.721 register 0x200000200000 2097152 00:05:04.721 malloc 3145728 00:05:04.721 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.721 register 0x200000400000 4194304 00:05:04.721 buf 0x2000004fffc0 len 3145728 PASSED 00:05:04.721 malloc 64 00:05:04.721 buf 0x2000004ffec0 len 64 PASSED 00:05:04.721 malloc 4194304 00:05:04.721 register 0x200000800000 6291456 00:05:04.721 buf 0x2000009fffc0 len 4194304 PASSED 00:05:04.721 free 0x2000004fffc0 3145728 00:05:04.721 free 0x2000004ffec0 64 00:05:04.721 unregister 0x200000400000 4194304 PASSED 00:05:04.721 free 0x2000009fffc0 4194304 00:05:04.721 unregister 0x200000800000 6291456 PASSED 00:05:04.721 malloc 8388608 00:05:04.721 register 0x200000400000 10485760 00:05:04.721 buf 0x2000005fffc0 len 8388608 PASSED 00:05:04.721 free 0x2000005fffc0 8388608 00:05:04.721 unregister 0x200000400000 10485760 PASSED 00:05:04.721 passed 00:05:04.721 00:05:04.721 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.721 suites 1 1 n/a 0 0 00:05:04.721 tests 1 1 1 0 0 00:05:04.721 asserts 15 15 15 0 n/a 00:05:04.721 00:05:04.721 Elapsed time = 0.077 seconds 00:05:04.721 00:05:04.721 real 0m0.283s 00:05:04.721 user 0m0.106s 00:05:04.721 sys 0m0.074s 00:05:04.721 11:16:47 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.721 ************************************ 00:05:04.721 END TEST env_mem_callbacks 00:05:04.721 ************************************ 00:05:04.721 11:16:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:04.721 ************************************ 00:05:04.721 END TEST env 00:05:04.721 ************************************ 00:05:04.721 00:05:04.721 real 0m9.503s 00:05:04.721 user 0m7.579s 00:05:04.721 sys 0m1.525s 00:05:04.721 11:16:47 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.721 11:16:47 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:04.721 11:16:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:04.721 11:16:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.721 11:16:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.721 11:16:47 -- common/autotest_common.sh@10 -- # set +x 00:05:04.721 ************************************ 00:05:04.721 START TEST rpc 00:05:04.721 ************************************ 00:05:04.721 11:16:47 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:04.978 * Looking for test storage... 00:05:04.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.979 11:16:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.979 11:16:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.979 11:16:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.979 11:16:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.979 11:16:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.979 11:16:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.979 11:16:47 rpc -- scripts/common.sh@345 -- # : 1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.979 11:16:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.979 11:16:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.979 11:16:47 rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.979 11:16:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.979 11:16:47 rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.979 11:16:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.979 11:16:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.979 11:16:47 rpc -- scripts/common.sh@368 -- # return 0 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.979 --rc genhtml_branch_coverage=1 00:05:04.979 --rc genhtml_function_coverage=1 00:05:04.979 --rc genhtml_legend=1 00:05:04.979 --rc geninfo_all_blocks=1 00:05:04.979 --rc geninfo_unexecuted_blocks=1 00:05:04.979 00:05:04.979 ' 00:05:04.979 11:16:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57933 00:05:04.979 11:16:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:04.979 11:16:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.979 11:16:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57933 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@833 -- # '[' -z 57933 ']' 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
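The waitforlisten step above just polls until spdk_tgt accepts connections on its JSON-RPC UNIX socket. A rough C equivalent of one probe iteration (the helper name rpc_socket_ready is made up for illustration):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Returns 1 once something accepts connections on 'path', e.g.
     * rpc_socket_ready("/var/tmp/spdk.sock"); callers poll in a loop. */
    static int
    rpc_socket_ready(const char *path)
    {
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            int fd, rc;

            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                    return 0;
            }
            rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
            close(fd);
            return rc == 0;
    }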
00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.979 11:16:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.236 [2024-11-15 11:16:47.936895] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:05.236 [2024-11-15 11:16:47.937298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57933 ] 00:05:05.236 [2024-11-15 11:16:48.113023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.493 [2024-11-15 11:16:48.241646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:05.493 [2024-11-15 11:16:48.241924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57933' to capture a snapshot of events at runtime. 00:05:05.493 [2024-11-15 11:16:48.242083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:05.493 [2024-11-15 11:16:48.242152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:05.493 [2024-11-15 11:16:48.242281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57933 for offline analysis/debug. 00:05:05.493 [2024-11-15 11:16:48.243678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.429 11:16:49 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.429 11:16:49 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:06.429 11:16:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.429 11:16:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:06.429 11:16:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:06.429 11:16:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:06.429 11:16:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.429 11:16:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.429 11:16:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.429 ************************************ 00:05:06.429 START TEST rpc_integrity 00:05:06.429 ************************************ 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.429 11:16:49 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:06.429 { 00:05:06.429 "name": "Malloc0", 00:05:06.429 "aliases": [ 00:05:06.429 "f5c63fc1-7cd6-4bc6-be66-85880be4782b" 00:05:06.429 ], 00:05:06.429 "product_name": "Malloc disk", 00:05:06.429 "block_size": 512, 00:05:06.429 "num_blocks": 16384, 00:05:06.429 "uuid": "f5c63fc1-7cd6-4bc6-be66-85880be4782b", 00:05:06.429 "assigned_rate_limits": { 00:05:06.429 "rw_ios_per_sec": 0, 00:05:06.429 "rw_mbytes_per_sec": 0, 00:05:06.429 "r_mbytes_per_sec": 0, 00:05:06.429 "w_mbytes_per_sec": 0 00:05:06.429 }, 00:05:06.429 "claimed": false, 00:05:06.429 "zoned": false, 00:05:06.429 "supported_io_types": { 00:05:06.429 "read": true, 00:05:06.429 "write": true, 00:05:06.429 "unmap": true, 00:05:06.429 "flush": true, 00:05:06.429 "reset": true, 00:05:06.429 "nvme_admin": false, 00:05:06.429 "nvme_io": false, 00:05:06.429 "nvme_io_md": false, 00:05:06.429 "write_zeroes": true, 00:05:06.429 "zcopy": true, 00:05:06.429 "get_zone_info": false, 00:05:06.429 "zone_management": false, 00:05:06.429 "zone_append": false, 00:05:06.429 "compare": false, 00:05:06.429 "compare_and_write": false, 00:05:06.429 "abort": true, 00:05:06.429 "seek_hole": false, 00:05:06.429 "seek_data": false, 00:05:06.429 "copy": true, 00:05:06.429 "nvme_iov_md": false 00:05:06.429 }, 00:05:06.429 "memory_domains": [ 00:05:06.429 { 00:05:06.429 "dma_device_id": "system", 00:05:06.429 "dma_device_type": 1 00:05:06.429 }, 00:05:06.429 { 00:05:06.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.429 "dma_device_type": 2 00:05:06.429 } 00:05:06.429 ], 00:05:06.429 "driver_specific": {} 00:05:06.429 } 00:05:06.429 ]' 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:06.429 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.429 [2024-11-15 11:16:49.361477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:06.429 [2024-11-15 11:16:49.361562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:06.429 [2024-11-15 11:16:49.361618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:06.429 [2024-11-15 11:16:49.361643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:06.429 [2024-11-15 11:16:49.364806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:06.429 [2024-11-15 11:16:49.364861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:06.429 Passthru0 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.429 
11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.429 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:06.689 { 00:05:06.689 "name": "Malloc0", 00:05:06.689 "aliases": [ 00:05:06.689 "f5c63fc1-7cd6-4bc6-be66-85880be4782b" 00:05:06.689 ], 00:05:06.689 "product_name": "Malloc disk", 00:05:06.689 "block_size": 512, 00:05:06.689 "num_blocks": 16384, 00:05:06.689 "uuid": "f5c63fc1-7cd6-4bc6-be66-85880be4782b", 00:05:06.689 "assigned_rate_limits": { 00:05:06.689 "rw_ios_per_sec": 0, 00:05:06.689 "rw_mbytes_per_sec": 0, 00:05:06.689 "r_mbytes_per_sec": 0, 00:05:06.689 "w_mbytes_per_sec": 0 00:05:06.689 }, 00:05:06.689 "claimed": true, 00:05:06.689 "claim_type": "exclusive_write", 00:05:06.689 "zoned": false, 00:05:06.689 "supported_io_types": { 00:05:06.689 "read": true, 00:05:06.689 "write": true, 00:05:06.689 "unmap": true, 00:05:06.689 "flush": true, 00:05:06.689 "reset": true, 00:05:06.689 "nvme_admin": false, 00:05:06.689 "nvme_io": false, 00:05:06.689 "nvme_io_md": false, 00:05:06.689 "write_zeroes": true, 00:05:06.689 "zcopy": true, 00:05:06.689 "get_zone_info": false, 00:05:06.689 "zone_management": false, 00:05:06.689 "zone_append": false, 00:05:06.689 "compare": false, 00:05:06.689 "compare_and_write": false, 00:05:06.689 "abort": true, 00:05:06.689 "seek_hole": false, 00:05:06.689 "seek_data": false, 00:05:06.689 "copy": true, 00:05:06.689 "nvme_iov_md": false 00:05:06.689 }, 00:05:06.689 "memory_domains": [ 00:05:06.689 { 00:05:06.689 "dma_device_id": "system", 00:05:06.689 "dma_device_type": 1 00:05:06.689 }, 00:05:06.689 { 00:05:06.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.689 "dma_device_type": 2 00:05:06.689 } 00:05:06.689 ], 00:05:06.689 "driver_specific": {} 00:05:06.689 }, 00:05:06.689 { 00:05:06.689 "name": "Passthru0", 00:05:06.689 "aliases": [ 00:05:06.689 "af4b2b9f-1a8e-50f5-82b4-e91675e4e1d8" 00:05:06.689 ], 00:05:06.689 "product_name": "passthru", 00:05:06.689 "block_size": 512, 00:05:06.689 "num_blocks": 16384, 00:05:06.689 "uuid": "af4b2b9f-1a8e-50f5-82b4-e91675e4e1d8", 00:05:06.689 "assigned_rate_limits": { 00:05:06.689 "rw_ios_per_sec": 0, 00:05:06.689 "rw_mbytes_per_sec": 0, 00:05:06.689 "r_mbytes_per_sec": 0, 00:05:06.689 "w_mbytes_per_sec": 0 00:05:06.689 }, 00:05:06.689 "claimed": false, 00:05:06.689 "zoned": false, 00:05:06.689 "supported_io_types": { 00:05:06.689 "read": true, 00:05:06.689 "write": true, 00:05:06.689 "unmap": true, 00:05:06.689 "flush": true, 00:05:06.689 "reset": true, 00:05:06.689 "nvme_admin": false, 00:05:06.689 "nvme_io": false, 00:05:06.689 "nvme_io_md": false, 00:05:06.689 "write_zeroes": true, 00:05:06.689 "zcopy": true, 00:05:06.689 "get_zone_info": false, 00:05:06.689 "zone_management": false, 00:05:06.689 "zone_append": false, 00:05:06.689 "compare": false, 00:05:06.689 "compare_and_write": false, 00:05:06.689 "abort": true, 00:05:06.689 "seek_hole": false, 00:05:06.689 "seek_data": false, 00:05:06.689 "copy": true, 00:05:06.689 "nvme_iov_md": false 00:05:06.689 }, 00:05:06.689 "memory_domains": [ 00:05:06.689 { 00:05:06.689 "dma_device_id": "system", 00:05:06.689 "dma_device_type": 1 00:05:06.689 }, 00:05:06.689 { 00:05:06.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.689 "dma_device_type": 2 
00:05:06.689 } 00:05:06.689 ], 00:05:06.689 "driver_specific": { 00:05:06.689 "passthru": { 00:05:06.689 "name": "Passthru0", 00:05:06.689 "base_bdev_name": "Malloc0" 00:05:06.689 } 00:05:06.689 } 00:05:06.689 } 00:05:06.689 ]' 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:06.689 11:16:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:06.689 00:05:06.689 real 0m0.349s 00:05:06.689 user 0m0.228s 00:05:06.689 sys 0m0.031s 00:05:06.689 ************************************ 00:05:06.689 END TEST rpc_integrity 00:05:06.689 ************************************ 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.689 11:16:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:06.689 11:16:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.689 11:16:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.689 11:16:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 ************************************ 00:05:06.689 START TEST rpc_plugins 00:05:06.689 ************************************ 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:06.689 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.689 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:06.689 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.689 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.689 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:06.689 { 00:05:06.689 "name": "Malloc1", 00:05:06.689 "aliases": 
[ 00:05:06.689 "a2c7a811-c6c1-4804-bab9-01c22e33e117" 00:05:06.689 ], 00:05:06.689 "product_name": "Malloc disk", 00:05:06.689 "block_size": 4096, 00:05:06.689 "num_blocks": 256, 00:05:06.689 "uuid": "a2c7a811-c6c1-4804-bab9-01c22e33e117", 00:05:06.689 "assigned_rate_limits": { 00:05:06.689 "rw_ios_per_sec": 0, 00:05:06.689 "rw_mbytes_per_sec": 0, 00:05:06.689 "r_mbytes_per_sec": 0, 00:05:06.689 "w_mbytes_per_sec": 0 00:05:06.689 }, 00:05:06.689 "claimed": false, 00:05:06.689 "zoned": false, 00:05:06.689 "supported_io_types": { 00:05:06.689 "read": true, 00:05:06.689 "write": true, 00:05:06.689 "unmap": true, 00:05:06.689 "flush": true, 00:05:06.689 "reset": true, 00:05:06.689 "nvme_admin": false, 00:05:06.689 "nvme_io": false, 00:05:06.689 "nvme_io_md": false, 00:05:06.689 "write_zeroes": true, 00:05:06.689 "zcopy": true, 00:05:06.689 "get_zone_info": false, 00:05:06.689 "zone_management": false, 00:05:06.689 "zone_append": false, 00:05:06.689 "compare": false, 00:05:06.689 "compare_and_write": false, 00:05:06.690 "abort": true, 00:05:06.690 "seek_hole": false, 00:05:06.690 "seek_data": false, 00:05:06.690 "copy": true, 00:05:06.690 "nvme_iov_md": false 00:05:06.690 }, 00:05:06.690 "memory_domains": [ 00:05:06.690 { 00:05:06.690 "dma_device_id": "system", 00:05:06.690 "dma_device_type": 1 00:05:06.690 }, 00:05:06.690 { 00:05:06.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:06.690 "dma_device_type": 2 00:05:06.690 } 00:05:06.690 ], 00:05:06.690 "driver_specific": {} 00:05:06.690 } 00:05:06.690 ]' 00:05:06.690 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:06.949 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:06.949 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.949 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.949 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:06.949 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:06.949 ************************************ 00:05:06.949 END TEST rpc_plugins 00:05:06.949 ************************************ 00:05:06.949 11:16:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:06.949 00:05:06.949 real 0m0.168s 00:05:06.949 user 0m0.104s 00:05:06.949 sys 0m0.021s 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.949 11:16:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:06.949 11:16:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:06.949 11:16:49 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.949 11:16:49 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.949 11:16:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.949 ************************************ 00:05:06.949 START TEST rpc_trace_cmd_test 00:05:06.949 ************************************ 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.949 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:06.949 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57933", 00:05:06.949 "tpoint_group_mask": "0x8", 00:05:06.949 "iscsi_conn": { 00:05:06.949 "mask": "0x2", 00:05:06.949 "tpoint_mask": "0x0" 00:05:06.949 }, 00:05:06.949 "scsi": { 00:05:06.949 "mask": "0x4", 00:05:06.949 "tpoint_mask": "0x0" 00:05:06.949 }, 00:05:06.949 "bdev": { 00:05:06.949 "mask": "0x8", 00:05:06.949 "tpoint_mask": "0xffffffffffffffff" 00:05:06.949 }, 00:05:06.949 "nvmf_rdma": { 00:05:06.949 "mask": "0x10", 00:05:06.949 "tpoint_mask": "0x0" 00:05:06.949 }, 00:05:06.949 "nvmf_tcp": { 00:05:06.949 "mask": "0x20", 00:05:06.949 "tpoint_mask": "0x0" 00:05:06.949 }, 00:05:06.949 "ftl": { 00:05:06.950 "mask": "0x40", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "blobfs": { 00:05:06.950 "mask": "0x80", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "dsa": { 00:05:06.950 "mask": "0x200", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "thread": { 00:05:06.950 "mask": "0x400", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "nvme_pcie": { 00:05:06.950 "mask": "0x800", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "iaa": { 00:05:06.950 "mask": "0x1000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "nvme_tcp": { 00:05:06.950 "mask": "0x2000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "bdev_nvme": { 00:05:06.950 "mask": "0x4000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "sock": { 00:05:06.950 "mask": "0x8000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "blob": { 00:05:06.950 "mask": "0x10000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "bdev_raid": { 00:05:06.950 "mask": "0x20000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 }, 00:05:06.950 "scheduler": { 00:05:06.950 "mask": "0x40000", 00:05:06.950 "tpoint_mask": "0x0" 00:05:06.950 } 00:05:06.950 }' 00:05:06.950 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:06.950 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:06.950 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:07.214 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:07.214 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:07.214 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:07.214 11:16:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:07.214 11:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:07.214 11:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:07.214 11:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:07.214 ************************************ 00:05:07.214 END TEST rpc_trace_cmd_test 00:05:07.214 ************************************ 00:05:07.214 00:05:07.214 real 0m0.273s 
00:05:07.214 user 0m0.237s 00:05:07.214 sys 0m0.026s 00:05:07.214 11:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.214 11:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:07.214 11:16:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:07.214 11:16:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:07.214 11:16:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:07.214 11:16:50 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.214 11:16:50 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.214 11:16:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.214 ************************************ 00:05:07.214 START TEST rpc_daemon_integrity 00:05:07.214 ************************************ 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.214 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.481 { 00:05:07.481 "name": "Malloc2", 00:05:07.481 "aliases": [ 00:05:07.481 "0ec43ccc-9179-442e-aee5-c9dfbf9b6d0b" 00:05:07.481 ], 00:05:07.481 "product_name": "Malloc disk", 00:05:07.481 "block_size": 512, 00:05:07.481 "num_blocks": 16384, 00:05:07.481 "uuid": "0ec43ccc-9179-442e-aee5-c9dfbf9b6d0b", 00:05:07.481 "assigned_rate_limits": { 00:05:07.481 "rw_ios_per_sec": 0, 00:05:07.481 "rw_mbytes_per_sec": 0, 00:05:07.481 "r_mbytes_per_sec": 0, 00:05:07.481 "w_mbytes_per_sec": 0 00:05:07.481 }, 00:05:07.481 "claimed": false, 00:05:07.481 "zoned": false, 00:05:07.481 "supported_io_types": { 00:05:07.481 "read": true, 00:05:07.481 "write": true, 00:05:07.481 "unmap": true, 00:05:07.481 "flush": true, 00:05:07.481 "reset": true, 00:05:07.481 "nvme_admin": false, 00:05:07.481 "nvme_io": false, 00:05:07.481 "nvme_io_md": false, 00:05:07.481 "write_zeroes": true, 00:05:07.481 "zcopy": true, 00:05:07.481 "get_zone_info": false, 00:05:07.481 "zone_management": false, 00:05:07.481 "zone_append": false, 00:05:07.481 "compare": false, 00:05:07.481 
"compare_and_write": false, 00:05:07.481 "abort": true, 00:05:07.481 "seek_hole": false, 00:05:07.481 "seek_data": false, 00:05:07.481 "copy": true, 00:05:07.481 "nvme_iov_md": false 00:05:07.481 }, 00:05:07.481 "memory_domains": [ 00:05:07.481 { 00:05:07.481 "dma_device_id": "system", 00:05:07.481 "dma_device_type": 1 00:05:07.481 }, 00:05:07.481 { 00:05:07.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.481 "dma_device_type": 2 00:05:07.481 } 00:05:07.481 ], 00:05:07.481 "driver_specific": {} 00:05:07.481 } 00:05:07.481 ]' 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.481 [2024-11-15 11:16:50.303585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:07.481 [2024-11-15 11:16:50.303828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.481 [2024-11-15 11:16:50.303903] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:07.481 [2024-11-15 11:16:50.304080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.481 [2024-11-15 11:16:50.307543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.481 [2024-11-15 11:16:50.307595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.481 Passthru0 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.481 { 00:05:07.481 "name": "Malloc2", 00:05:07.481 "aliases": [ 00:05:07.481 "0ec43ccc-9179-442e-aee5-c9dfbf9b6d0b" 00:05:07.481 ], 00:05:07.481 "product_name": "Malloc disk", 00:05:07.481 "block_size": 512, 00:05:07.481 "num_blocks": 16384, 00:05:07.481 "uuid": "0ec43ccc-9179-442e-aee5-c9dfbf9b6d0b", 00:05:07.481 "assigned_rate_limits": { 00:05:07.481 "rw_ios_per_sec": 0, 00:05:07.481 "rw_mbytes_per_sec": 0, 00:05:07.481 "r_mbytes_per_sec": 0, 00:05:07.481 "w_mbytes_per_sec": 0 00:05:07.481 }, 00:05:07.481 "claimed": true, 00:05:07.481 "claim_type": "exclusive_write", 00:05:07.481 "zoned": false, 00:05:07.481 "supported_io_types": { 00:05:07.481 "read": true, 00:05:07.481 "write": true, 00:05:07.481 "unmap": true, 00:05:07.481 "flush": true, 00:05:07.481 "reset": true, 00:05:07.481 "nvme_admin": false, 00:05:07.481 "nvme_io": false, 00:05:07.481 "nvme_io_md": false, 00:05:07.481 "write_zeroes": true, 00:05:07.481 "zcopy": true, 00:05:07.481 "get_zone_info": false, 00:05:07.481 "zone_management": false, 00:05:07.481 "zone_append": false, 00:05:07.481 "compare": false, 00:05:07.481 "compare_and_write": false, 00:05:07.481 "abort": true, 00:05:07.481 "seek_hole": false, 00:05:07.481 "seek_data": false, 
00:05:07.481 "copy": true, 00:05:07.481 "nvme_iov_md": false 00:05:07.481 }, 00:05:07.481 "memory_domains": [ 00:05:07.481 { 00:05:07.481 "dma_device_id": "system", 00:05:07.481 "dma_device_type": 1 00:05:07.481 }, 00:05:07.481 { 00:05:07.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.481 "dma_device_type": 2 00:05:07.481 } 00:05:07.481 ], 00:05:07.481 "driver_specific": {} 00:05:07.481 }, 00:05:07.481 { 00:05:07.481 "name": "Passthru0", 00:05:07.481 "aliases": [ 00:05:07.481 "29c32eb0-c8b3-5593-8ef9-da8cbc348382" 00:05:07.481 ], 00:05:07.481 "product_name": "passthru", 00:05:07.481 "block_size": 512, 00:05:07.481 "num_blocks": 16384, 00:05:07.481 "uuid": "29c32eb0-c8b3-5593-8ef9-da8cbc348382", 00:05:07.481 "assigned_rate_limits": { 00:05:07.481 "rw_ios_per_sec": 0, 00:05:07.481 "rw_mbytes_per_sec": 0, 00:05:07.481 "r_mbytes_per_sec": 0, 00:05:07.481 "w_mbytes_per_sec": 0 00:05:07.481 }, 00:05:07.481 "claimed": false, 00:05:07.481 "zoned": false, 00:05:07.481 "supported_io_types": { 00:05:07.481 "read": true, 00:05:07.481 "write": true, 00:05:07.481 "unmap": true, 00:05:07.481 "flush": true, 00:05:07.481 "reset": true, 00:05:07.481 "nvme_admin": false, 00:05:07.481 "nvme_io": false, 00:05:07.481 "nvme_io_md": false, 00:05:07.481 "write_zeroes": true, 00:05:07.481 "zcopy": true, 00:05:07.481 "get_zone_info": false, 00:05:07.481 "zone_management": false, 00:05:07.481 "zone_append": false, 00:05:07.481 "compare": false, 00:05:07.481 "compare_and_write": false, 00:05:07.481 "abort": true, 00:05:07.481 "seek_hole": false, 00:05:07.481 "seek_data": false, 00:05:07.481 "copy": true, 00:05:07.481 "nvme_iov_md": false 00:05:07.481 }, 00:05:07.481 "memory_domains": [ 00:05:07.481 { 00:05:07.481 "dma_device_id": "system", 00:05:07.481 "dma_device_type": 1 00:05:07.481 }, 00:05:07.481 { 00:05:07.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.481 "dma_device_type": 2 00:05:07.481 } 00:05:07.481 ], 00:05:07.481 "driver_specific": { 00:05:07.481 "passthru": { 00:05:07.481 "name": "Passthru0", 00:05:07.481 "base_bdev_name": "Malloc2" 00:05:07.481 } 00:05:07.481 } 00:05:07.481 } 00:05:07.481 ]' 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.481 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.745 00:05:07.745 real 0m0.361s 00:05:07.745 user 0m0.218s 00:05:07.745 sys 0m0.047s 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.745 11:16:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.745 ************************************ 00:05:07.745 END TEST rpc_daemon_integrity 00:05:07.745 ************************************ 00:05:07.745 11:16:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:07.745 11:16:50 rpc -- rpc/rpc.sh@84 -- # killprocess 57933 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@952 -- # '[' -z 57933 ']' 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@956 -- # kill -0 57933 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@957 -- # uname 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57933 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:07.745 killing process with pid 57933 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57933' 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@971 -- # kill 57933 00:05:07.745 11:16:50 rpc -- common/autotest_common.sh@976 -- # wait 57933 00:05:10.277 00:05:10.277 real 0m5.217s 00:05:10.277 user 0m5.866s 00:05:10.277 sys 0m0.901s 00:05:10.277 11:16:52 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.277 ************************************ 00:05:10.277 END TEST rpc 00:05:10.277 ************************************ 00:05:10.277 11:16:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.277 11:16:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:10.277 11:16:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.277 11:16:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.277 11:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.277 ************************************ 00:05:10.277 START TEST skip_rpc 00:05:10.277 ************************************ 00:05:10.277 11:16:52 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:10.277 * Looking for test storage... 
00:05:10.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.277 11:16:52 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.277 11:16:52 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.277 11:16:52 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.277 11:16:53 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.277 11:16:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:10.278 11:16:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.278 11:16:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.278 11:16:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.278 11:16:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.278 --rc genhtml_branch_coverage=1 00:05:10.278 --rc genhtml_function_coverage=1 00:05:10.278 --rc genhtml_legend=1 00:05:10.278 --rc geninfo_all_blocks=1 00:05:10.278 --rc geninfo_unexecuted_blocks=1 00:05:10.278 00:05:10.278 ' 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.278 --rc genhtml_branch_coverage=1 00:05:10.278 --rc genhtml_function_coverage=1 00:05:10.278 --rc genhtml_legend=1 00:05:10.278 --rc geninfo_all_blocks=1 00:05:10.278 --rc geninfo_unexecuted_blocks=1 00:05:10.278 00:05:10.278 ' 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:05:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.278 --rc genhtml_branch_coverage=1 00:05:10.278 --rc genhtml_function_coverage=1 00:05:10.278 --rc genhtml_legend=1 00:05:10.278 --rc geninfo_all_blocks=1 00:05:10.278 --rc geninfo_unexecuted_blocks=1 00:05:10.278 00:05:10.278 ' 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.278 --rc genhtml_branch_coverage=1 00:05:10.278 --rc genhtml_function_coverage=1 00:05:10.278 --rc genhtml_legend=1 00:05:10.278 --rc geninfo_all_blocks=1 00:05:10.278 --rc geninfo_unexecuted_blocks=1 00:05:10.278 00:05:10.278 ' 00:05:10.278 11:16:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:10.278 11:16:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:10.278 11:16:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.278 11:16:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.278 ************************************ 00:05:10.278 START TEST skip_rpc 00:05:10.278 ************************************ 00:05:10.278 11:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:10.278 11:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58162 00:05:10.278 11:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.278 11:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:10.278 11:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:10.278 [2024-11-15 11:16:53.184658] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:10.278 [2024-11-15 11:16:53.184827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58162 ] 00:05:10.536 [2024-11-15 11:16:53.361948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.795 [2024-11-15 11:16:53.497184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:16.061 11:16:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58162 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58162 ']' 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58162 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58162 00:05:16.062 killing process with pid 58162 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58162' 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58162 00:05:16.062 11:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58162 00:05:17.435 00:05:17.435 real 0m7.306s 00:05:17.435 user 0m6.710s 00:05:17.435 sys 0m0.489s 00:05:17.435 11:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:17.435 ************************************ 00:05:17.435 11:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.435 END TEST skip_rpc 00:05:17.435 
************************************ 00:05:17.694 11:17:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.694 11:17:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:17.694 11:17:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.694 11:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.694 ************************************ 00:05:17.694 START TEST skip_rpc_with_json 00:05:17.694 ************************************ 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58266 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58266 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58266 ']' 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.694 11:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.694 [2024-11-15 11:17:00.557869] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:17.694 [2024-11-15 11:17:00.558093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58266 ] 00:05:17.954 [2024-11-15 11:17:00.747569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.954 [2024-11-15 11:17:00.882548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.890 [2024-11-15 11:17:01.816960] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.890 request: 00:05:18.890 { 00:05:18.890 "trtype": "tcp", 00:05:18.890 "method": "nvmf_get_transports", 00:05:18.890 "req_id": 1 00:05:18.890 } 00:05:18.890 Got JSON-RPC error response 00:05:18.890 response: 00:05:18.890 { 00:05:18.890 "code": -19, 00:05:18.890 "message": "No such device" 00:05:18.890 } 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.890 [2024-11-15 11:17:01.829114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.890 11:17:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.149 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.149 11:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.149 { 00:05:19.149 "subsystems": [ 00:05:19.149 { 00:05:19.149 "subsystem": "fsdev", 00:05:19.149 "config": [ 00:05:19.149 { 00:05:19.149 "method": "fsdev_set_opts", 00:05:19.149 "params": { 00:05:19.149 "fsdev_io_pool_size": 65535, 00:05:19.149 "fsdev_io_cache_size": 256 00:05:19.149 } 00:05:19.149 } 00:05:19.149 ] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "keyring", 00:05:19.149 "config": [] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "iobuf", 00:05:19.149 "config": [ 00:05:19.149 { 00:05:19.149 "method": "iobuf_set_options", 00:05:19.149 "params": { 00:05:19.149 "small_pool_count": 8192, 00:05:19.149 "large_pool_count": 1024, 00:05:19.149 "small_bufsize": 8192, 00:05:19.149 "large_bufsize": 135168, 00:05:19.149 "enable_numa": false 00:05:19.149 } 00:05:19.149 } 00:05:19.149 ] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "sock", 00:05:19.149 "config": [ 00:05:19.149 { 
00:05:19.149 "method": "sock_set_default_impl", 00:05:19.149 "params": { 00:05:19.149 "impl_name": "posix" 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "sock_impl_set_options", 00:05:19.149 "params": { 00:05:19.149 "impl_name": "ssl", 00:05:19.149 "recv_buf_size": 4096, 00:05:19.149 "send_buf_size": 4096, 00:05:19.149 "enable_recv_pipe": true, 00:05:19.149 "enable_quickack": false, 00:05:19.149 "enable_placement_id": 0, 00:05:19.149 "enable_zerocopy_send_server": true, 00:05:19.149 "enable_zerocopy_send_client": false, 00:05:19.149 "zerocopy_threshold": 0, 00:05:19.149 "tls_version": 0, 00:05:19.149 "enable_ktls": false 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "sock_impl_set_options", 00:05:19.149 "params": { 00:05:19.149 "impl_name": "posix", 00:05:19.149 "recv_buf_size": 2097152, 00:05:19.149 "send_buf_size": 2097152, 00:05:19.149 "enable_recv_pipe": true, 00:05:19.149 "enable_quickack": false, 00:05:19.149 "enable_placement_id": 0, 00:05:19.149 "enable_zerocopy_send_server": true, 00:05:19.149 "enable_zerocopy_send_client": false, 00:05:19.149 "zerocopy_threshold": 0, 00:05:19.149 "tls_version": 0, 00:05:19.149 "enable_ktls": false 00:05:19.149 } 00:05:19.149 } 00:05:19.149 ] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "vmd", 00:05:19.149 "config": [] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "accel", 00:05:19.149 "config": [ 00:05:19.149 { 00:05:19.149 "method": "accel_set_options", 00:05:19.149 "params": { 00:05:19.149 "small_cache_size": 128, 00:05:19.149 "large_cache_size": 16, 00:05:19.149 "task_count": 2048, 00:05:19.149 "sequence_count": 2048, 00:05:19.149 "buf_count": 2048 00:05:19.149 } 00:05:19.149 } 00:05:19.149 ] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "bdev", 00:05:19.149 "config": [ 00:05:19.149 { 00:05:19.149 "method": "bdev_set_options", 00:05:19.149 "params": { 00:05:19.149 "bdev_io_pool_size": 65535, 00:05:19.149 "bdev_io_cache_size": 256, 00:05:19.149 "bdev_auto_examine": true, 00:05:19.149 "iobuf_small_cache_size": 128, 00:05:19.149 "iobuf_large_cache_size": 16 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "bdev_raid_set_options", 00:05:19.149 "params": { 00:05:19.149 "process_window_size_kb": 1024, 00:05:19.149 "process_max_bandwidth_mb_sec": 0 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "bdev_iscsi_set_options", 00:05:19.149 "params": { 00:05:19.149 "timeout_sec": 30 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "bdev_nvme_set_options", 00:05:19.149 "params": { 00:05:19.149 "action_on_timeout": "none", 00:05:19.149 "timeout_us": 0, 00:05:19.149 "timeout_admin_us": 0, 00:05:19.149 "keep_alive_timeout_ms": 10000, 00:05:19.149 "arbitration_burst": 0, 00:05:19.149 "low_priority_weight": 0, 00:05:19.149 "medium_priority_weight": 0, 00:05:19.149 "high_priority_weight": 0, 00:05:19.149 "nvme_adminq_poll_period_us": 10000, 00:05:19.149 "nvme_ioq_poll_period_us": 0, 00:05:19.149 "io_queue_requests": 0, 00:05:19.149 "delay_cmd_submit": true, 00:05:19.149 "transport_retry_count": 4, 00:05:19.149 "bdev_retry_count": 3, 00:05:19.149 "transport_ack_timeout": 0, 00:05:19.149 "ctrlr_loss_timeout_sec": 0, 00:05:19.149 "reconnect_delay_sec": 0, 00:05:19.149 "fast_io_fail_timeout_sec": 0, 00:05:19.149 "disable_auto_failback": false, 00:05:19.149 "generate_uuids": false, 00:05:19.149 "transport_tos": 0, 00:05:19.149 "nvme_error_stat": false, 00:05:19.149 "rdma_srq_size": 0, 00:05:19.149 "io_path_stat": false, 
00:05:19.149 "allow_accel_sequence": false, 00:05:19.149 "rdma_max_cq_size": 0, 00:05:19.149 "rdma_cm_event_timeout_ms": 0, 00:05:19.149 "dhchap_digests": [ 00:05:19.149 "sha256", 00:05:19.149 "sha384", 00:05:19.149 "sha512" 00:05:19.149 ], 00:05:19.149 "dhchap_dhgroups": [ 00:05:19.149 "null", 00:05:19.149 "ffdhe2048", 00:05:19.149 "ffdhe3072", 00:05:19.149 "ffdhe4096", 00:05:19.149 "ffdhe6144", 00:05:19.149 "ffdhe8192" 00:05:19.149 ] 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "bdev_nvme_set_hotplug", 00:05:19.149 "params": { 00:05:19.149 "period_us": 100000, 00:05:19.149 "enable": false 00:05:19.149 } 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "method": "bdev_wait_for_examine" 00:05:19.149 } 00:05:19.149 ] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "scsi", 00:05:19.149 "config": null 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "scheduler", 00:05:19.149 "config": [ 00:05:19.149 { 00:05:19.149 "method": "framework_set_scheduler", 00:05:19.149 "params": { 00:05:19.149 "name": "static" 00:05:19.149 } 00:05:19.149 } 00:05:19.149 ] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "vhost_scsi", 00:05:19.149 "config": [] 00:05:19.149 }, 00:05:19.149 { 00:05:19.149 "subsystem": "vhost_blk", 00:05:19.150 "config": [] 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "subsystem": "ublk", 00:05:19.150 "config": [] 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "subsystem": "nbd", 00:05:19.150 "config": [] 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "subsystem": "nvmf", 00:05:19.150 "config": [ 00:05:19.150 { 00:05:19.150 "method": "nvmf_set_config", 00:05:19.150 "params": { 00:05:19.150 "discovery_filter": "match_any", 00:05:19.150 "admin_cmd_passthru": { 00:05:19.150 "identify_ctrlr": false 00:05:19.150 }, 00:05:19.150 "dhchap_digests": [ 00:05:19.150 "sha256", 00:05:19.150 "sha384", 00:05:19.150 "sha512" 00:05:19.150 ], 00:05:19.150 "dhchap_dhgroups": [ 00:05:19.150 "null", 00:05:19.150 "ffdhe2048", 00:05:19.150 "ffdhe3072", 00:05:19.150 "ffdhe4096", 00:05:19.150 "ffdhe6144", 00:05:19.150 "ffdhe8192" 00:05:19.150 ] 00:05:19.150 } 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "method": "nvmf_set_max_subsystems", 00:05:19.150 "params": { 00:05:19.150 "max_subsystems": 1024 00:05:19.150 } 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "method": "nvmf_set_crdt", 00:05:19.150 "params": { 00:05:19.150 "crdt1": 0, 00:05:19.150 "crdt2": 0, 00:05:19.150 "crdt3": 0 00:05:19.150 } 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "method": "nvmf_create_transport", 00:05:19.150 "params": { 00:05:19.150 "trtype": "TCP", 00:05:19.150 "max_queue_depth": 128, 00:05:19.150 "max_io_qpairs_per_ctrlr": 127, 00:05:19.150 "in_capsule_data_size": 4096, 00:05:19.150 "max_io_size": 131072, 00:05:19.150 "io_unit_size": 131072, 00:05:19.150 "max_aq_depth": 128, 00:05:19.150 "num_shared_buffers": 511, 00:05:19.150 "buf_cache_size": 4294967295, 00:05:19.150 "dif_insert_or_strip": false, 00:05:19.150 "zcopy": false, 00:05:19.150 "c2h_success": true, 00:05:19.150 "sock_priority": 0, 00:05:19.150 "abort_timeout_sec": 1, 00:05:19.150 "ack_timeout": 0, 00:05:19.150 "data_wr_pool_size": 0 00:05:19.150 } 00:05:19.150 } 00:05:19.150 ] 00:05:19.150 }, 00:05:19.150 { 00:05:19.150 "subsystem": "iscsi", 00:05:19.150 "config": [ 00:05:19.150 { 00:05:19.150 "method": "iscsi_set_options", 00:05:19.150 "params": { 00:05:19.150 "node_base": "iqn.2016-06.io.spdk", 00:05:19.150 "max_sessions": 128, 00:05:19.150 "max_connections_per_session": 2, 00:05:19.150 "max_queue_depth": 64, 00:05:19.150 
"default_time2wait": 2, 00:05:19.150 "default_time2retain": 20, 00:05:19.150 "first_burst_length": 8192, 00:05:19.150 "immediate_data": true, 00:05:19.150 "allow_duplicated_isid": false, 00:05:19.150 "error_recovery_level": 0, 00:05:19.150 "nop_timeout": 60, 00:05:19.150 "nop_in_interval": 30, 00:05:19.150 "disable_chap": false, 00:05:19.150 "require_chap": false, 00:05:19.150 "mutual_chap": false, 00:05:19.150 "chap_group": 0, 00:05:19.150 "max_large_datain_per_connection": 64, 00:05:19.150 "max_r2t_per_connection": 4, 00:05:19.150 "pdu_pool_size": 36864, 00:05:19.150 "immediate_data_pool_size": 16384, 00:05:19.150 "data_out_pool_size": 2048 00:05:19.150 } 00:05:19.150 } 00:05:19.150 ] 00:05:19.150 } 00:05:19.150 ] 00:05:19.150 } 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58266 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58266 ']' 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58266 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58266 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:19.150 killing process with pid 58266 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58266' 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58266 00:05:19.150 11:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58266 00:05:21.694 11:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58317 00:05:21.694 11:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.694 11:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58317 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58317 ']' 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58317 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58317 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.967 killing process with pid 58317 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58317' 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58317 00:05:26.967 11:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58317 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:28.869 00:05:28.869 real 0m11.180s 00:05:28.869 user 0m10.594s 00:05:28.869 sys 0m1.070s 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.869 ************************************ 00:05:28.869 END TEST skip_rpc_with_json 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:28.869 ************************************ 00:05:28.869 11:17:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.869 11:17:11 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.869 11:17:11 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.869 11:17:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.869 ************************************ 00:05:28.869 START TEST skip_rpc_with_delay 00:05:28.869 ************************************ 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.869 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.869 [2024-11-15 11:17:11.800157] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
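Editor's note: the app.c error above is the expected result in test_skip_rpc_with_delay: --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, so startup must fail fast, and the NOT wrapper just below converts that failure into a pass (es=1). A minimal reproduction outside the harness, using the same binary path as this run (illustrative only):

  # expected to print the spdk_app_start error and exit non-zero
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"   # the test only requires this to be non-zero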
00:05:29.128 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:29.128 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:29.128 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:29.128 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:29.128 00:05:29.128 real 0m0.215s 00:05:29.128 user 0m0.127s 00:05:29.128 sys 0m0.085s 00:05:29.128 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:29.128 11:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:29.128 ************************************ 00:05:29.128 END TEST skip_rpc_with_delay 00:05:29.128 ************************************ 00:05:29.128 11:17:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.128 11:17:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.128 11:17:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.128 11:17:11 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:29.128 11:17:11 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:29.128 11:17:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.128 ************************************ 00:05:29.128 START TEST exit_on_failed_rpc_init 00:05:29.128 ************************************ 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58456 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58456 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58456 ']' 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:29.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:29.128 11:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.128 [2024-11-15 11:17:12.063634] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:29.128 [2024-11-15 11:17:12.063875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58456 ] 00:05:29.387 [2024-11-15 11:17:12.252999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.646 [2024-11-15 11:17:12.378522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.581 11:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.581 [2024-11-15 11:17:13.330040] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:30.581 [2024-11-15 11:17:13.330224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58474 ] 00:05:30.581 [2024-11-15 11:17:13.523774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.841 [2024-11-15 11:17:13.703993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.841 [2024-11-15 11:17:13.704140] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
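Editor's note: the rpc.c listen error above (continued just below by the matching spdk_rpc_initialize failure and the non-zero spdk_app_stop) is exactly the scenario test_exit_on_failed_rpc_init provokes: a second target bound to the default RPC socket must refuse to start rather than hang. A minimal sketch of the collision; the -r/--rpc-socket flag in the last line is an assumption from common spdk_tgt usage, not something this log exercises:

  build/bin/spdk_tgt -m 0x1 &        # first instance owns /var/tmp/spdk.sock
  sleep 1
  build/bin/spdk_tgt -m 0x2          # fails: RPC Unix domain socket path in use
  # to run two targets side by side, give the second its own socket:
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock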
00:05:30.841 [2024-11-15 11:17:13.704168] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.841 [2024-11-15 11:17:13.704204] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58456 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58456 ']' 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58456 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58456 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.099 killing process with pid 58456 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58456' 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58456 00:05:31.099 11:17:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58456 00:05:33.633 00:05:33.633 real 0m4.227s 00:05:33.633 user 0m4.663s 00:05:33.633 sys 0m0.680s 00:05:33.633 11:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.633 ************************************ 00:05:33.633 END TEST exit_on_failed_rpc_init 00:05:33.633 ************************************ 00:05:33.633 11:17:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:33.633 11:17:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.633 00:05:33.633 real 0m23.331s 00:05:33.633 user 0m22.280s 00:05:33.633 sys 0m2.528s 00:05:33.633 11:17:16 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.633 11:17:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.633 ************************************ 00:05:33.633 END TEST skip_rpc 00:05:33.633 ************************************ 00:05:33.633 11:17:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:33.633 11:17:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.633 11:17:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.633 11:17:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.633 
************************************ 00:05:33.633 START TEST rpc_client 00:05:33.633 ************************************ 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:33.633 * Looking for test storage... 00:05:33.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.633 11:17:16 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.633 11:17:16 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.633 --rc genhtml_branch_coverage=1 00:05:33.634 --rc genhtml_function_coverage=1 00:05:33.634 --rc genhtml_legend=1 00:05:33.634 --rc geninfo_all_blocks=1 00:05:33.634 --rc geninfo_unexecuted_blocks=1 00:05:33.634 00:05:33.634 ' 00:05:33.634 11:17:16 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.634 --rc genhtml_branch_coverage=1 00:05:33.634 --rc genhtml_function_coverage=1 00:05:33.634 --rc genhtml_legend=1 00:05:33.634 --rc geninfo_all_blocks=1 00:05:33.634 --rc geninfo_unexecuted_blocks=1 00:05:33.634 00:05:33.634 ' 00:05:33.634 11:17:16 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.634 --rc genhtml_branch_coverage=1 00:05:33.634 --rc genhtml_function_coverage=1 00:05:33.634 --rc genhtml_legend=1 00:05:33.634 --rc geninfo_all_blocks=1 00:05:33.634 --rc geninfo_unexecuted_blocks=1 00:05:33.634 00:05:33.634 ' 00:05:33.634 11:17:16 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.634 --rc genhtml_branch_coverage=1 00:05:33.634 --rc genhtml_function_coverage=1 00:05:33.634 --rc genhtml_legend=1 00:05:33.634 --rc geninfo_all_blocks=1 00:05:33.634 --rc geninfo_unexecuted_blocks=1 00:05:33.634 00:05:33.634 ' 00:05:33.634 11:17:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:33.634 OK 00:05:33.634 11:17:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.634 00:05:33.634 real 0m0.233s 00:05:33.634 user 0m0.135s 00:05:33.634 sys 0m0.106s 00:05:33.634 11:17:16 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.634 11:17:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:33.634 ************************************ 00:05:33.634 END TEST rpc_client 00:05:33.634 ************************************ 00:05:33.634 11:17:16 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.634 11:17:16 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.634 11:17:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.634 11:17:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.634 ************************************ 00:05:33.634 START TEST json_config 00:05:33.634 ************************************ 00:05:33.634 11:17:16 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.893 11:17:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.893 11:17:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.893 11:17:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.893 11:17:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.893 11:17:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.893 11:17:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:33.893 11:17:16 json_config -- scripts/common.sh@345 -- # : 1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.893 11:17:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.893 11:17:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@353 -- # local d=1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.893 11:17:16 json_config -- scripts/common.sh@355 -- # echo 1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.893 11:17:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@353 -- # local d=2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.893 11:17:16 json_config -- scripts/common.sh@355 -- # echo 2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.893 11:17:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.893 11:17:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.893 11:17:16 json_config -- scripts/common.sh@368 -- # return 0 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:33.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.893 --rc genhtml_branch_coverage=1 00:05:33.893 --rc genhtml_function_coverage=1 00:05:33.893 --rc genhtml_legend=1 00:05:33.893 --rc geninfo_all_blocks=1 00:05:33.893 --rc geninfo_unexecuted_blocks=1 00:05:33.893 00:05:33.893 ' 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:33.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.893 --rc genhtml_branch_coverage=1 00:05:33.893 --rc genhtml_function_coverage=1 00:05:33.893 --rc genhtml_legend=1 00:05:33.893 --rc geninfo_all_blocks=1 00:05:33.893 --rc geninfo_unexecuted_blocks=1 00:05:33.893 00:05:33.893 ' 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:33.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.893 --rc genhtml_branch_coverage=1 00:05:33.893 --rc genhtml_function_coverage=1 00:05:33.893 --rc genhtml_legend=1 00:05:33.893 --rc geninfo_all_blocks=1 00:05:33.893 --rc geninfo_unexecuted_blocks=1 00:05:33.893 00:05:33.893 ' 00:05:33.893 11:17:16 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:33.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.893 --rc genhtml_branch_coverage=1 00:05:33.893 --rc genhtml_function_coverage=1 00:05:33.893 --rc genhtml_legend=1 00:05:33.893 --rc geninfo_all_blocks=1 00:05:33.893 --rc geninfo_unexecuted_blocks=1 00:05:33.893 00:05:33.893 ' 00:05:33.893 11:17:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:33.893 11:17:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:33.893 11:17:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:33.893 11:17:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:33.894 11:17:16 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bac1e23-9dc8-4821-9281-1e3cfea0c0df 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7bac1e23-9dc8-4821-9281-1e3cfea0c0df 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:33.894 11:17:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:33.894 11:17:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:33.894 11:17:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:33.894 11:17:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:33.894 11:17:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.894 11:17:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.894 11:17:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.894 11:17:16 json_config -- paths/export.sh@5 -- # export PATH 00:05:33.894 11:17:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@51 -- # : 0 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:33.894 11:17:16 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:33.894 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:33.894 11:17:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:33.894 WARNING: No tests are enabled so not running JSON configuration tests 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:33.894 11:17:16 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:33.894 ************************************ 00:05:33.894 END TEST json_config 00:05:33.894 ************************************ 00:05:33.894 00:05:33.894 real 0m0.171s 00:05:33.894 user 0m0.105s 00:05:33.894 sys 0m0.070s 00:05:33.894 11:17:16 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.894 11:17:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.894 11:17:16 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.894 11:17:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.894 11:17:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.894 11:17:16 -- common/autotest_common.sh@10 -- # set +x 00:05:33.894 ************************************ 00:05:33.894 START TEST json_config_extra_key 00:05:33.894 ************************************ 00:05:33.894 11:17:16 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:33.894 11:17:16 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:33.894 11:17:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:33.894 11:17:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:34.154 11:17:16 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.154 11:17:16 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:34.154 11:17:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.154 11:17:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:34.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.154 --rc genhtml_branch_coverage=1 00:05:34.154 --rc genhtml_function_coverage=1 00:05:34.154 --rc genhtml_legend=1 00:05:34.154 --rc geninfo_all_blocks=1 00:05:34.154 --rc geninfo_unexecuted_blocks=1 00:05:34.154 00:05:34.154 ' 00:05:34.154 11:17:16 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:34.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.154 --rc genhtml_branch_coverage=1 00:05:34.154 --rc genhtml_function_coverage=1 00:05:34.154 --rc genhtml_legend=1 00:05:34.154 --rc geninfo_all_blocks=1 00:05:34.154 --rc geninfo_unexecuted_blocks=1 00:05:34.154 00:05:34.154 ' 00:05:34.154 11:17:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:34.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.154 --rc genhtml_branch_coverage=1 00:05:34.154 --rc genhtml_function_coverage=1 00:05:34.154 --rc genhtml_legend=1 00:05:34.154 --rc geninfo_all_blocks=1 00:05:34.154 --rc geninfo_unexecuted_blocks=1 00:05:34.154 00:05:34.154 ' 00:05:34.154 11:17:16 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:34.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.154 --rc genhtml_branch_coverage=1 00:05:34.154 --rc 
genhtml_function_coverage=1 00:05:34.154 --rc genhtml_legend=1 00:05:34.154 --rc geninfo_all_blocks=1 00:05:34.154 --rc geninfo_unexecuted_blocks=1 00:05:34.154 00:05:34.154 ' 00:05:34.154 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bac1e23-9dc8-4821-9281-1e3cfea0c0df 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7bac1e23-9dc8-4821-9281-1e3cfea0c0df 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.154 11:17:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.154 11:17:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.154 11:17:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.154 11:17:16 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.154 11:17:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:34.154 11:17:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.154 11:17:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.155 11:17:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.155 11:17:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.155 11:17:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.155 INFO: launching applications... 00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
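Note on the "[: : integer expression expected" message logged twice above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', i.e. an unset variable expands to the empty string and then reaches the numeric -eq operator. A minimal sketch of the failure and one conventional guard; the variable name "flag" is a placeholder here, not taken from the script:

    flag=''                          # empty, exactly as traced: '[' '' -eq 1 ']'
    if [ "$flag" -eq 1 ]; then       # prints "[: : integer expression expected"
        echo enabled
    fi
    if [ "${flag:-0}" -eq 1 ]; then  # ${var:-0} keeps the operand numeric
        echo enabled
    fi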
00:05:34.155 11:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58684 00:05:34.155 Waiting for target to run... 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58684 /var/tmp/spdk_tgt.sock 00:05:34.155 11:17:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.155 11:17:16 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58684 ']' 00:05:34.155 11:17:16 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.155 11:17:16 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:34.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.155 11:17:16 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.155 11:17:16 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:34.155 11:17:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.155 [2024-11-15 11:17:17.071291] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:34.155 [2024-11-15 11:17:17.071521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58684 ] 00:05:34.722 [2024-11-15 11:17:17.552695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.722 [2024-11-15 11:17:17.664063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.658 11:17:18 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.658 11:17:18 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:35.658 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:35.658 11:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:35.658 INFO: shutting down applications... 
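Before the shutdown messages above, waitforlisten blocked until the freshly launched spdk_tgt (pid 58684) answered on /var/tmp/spdk_tgt.sock, with max_retries=100. A minimal sketch of that polling pattern, assuming a plain socket-existence check where the real helper probes the socket with rpc.py on each attempt:

    # waitforlisten-style poll; the real helper issues an RPC per attempt.
    wait_for_socket() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            [[ -S $sock ]] && return 0
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }
    wait_for_socket /var/tmp/spdk_tgt.sock 100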
00:05:35.658 11:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58684 ]] 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58684 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58684 00:05:35.658 11:17:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.916 11:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.916 11:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.916 11:17:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58684 00:05:35.916 11:17:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.482 11:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.482 11:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.482 11:17:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58684 00:05:36.482 11:17:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.050 11:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.050 11:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.050 11:17:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58684 00:05:37.050 11:17:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.685 11:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.685 11:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.685 11:17:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58684 00:05:37.685 11:17:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58684 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:37.943 SPDK target shutdown done 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.943 11:17:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.943 Success 00:05:37.943 11:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:37.943 ************************************ 00:05:37.943 END TEST json_config_extra_key 00:05:37.943 ************************************ 00:05:37.943 00:05:37.943 real 0m4.084s 00:05:37.943 user 0m3.822s 00:05:37.943 sys 0m0.652s 00:05:37.943 11:17:20 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.943 11:17:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:37.943 11:17:20 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:37.943 11:17:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.943 11:17:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.943 11:17:20 -- common/autotest_common.sh@10 -- # set +x 00:05:38.202 ************************************ 00:05:38.202 START TEST alias_rpc 00:05:38.202 ************************************ 00:05:38.202 11:17:20 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.202 * Looking for test storage... 00:05:38.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:38.202 11:17:20 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.202 11:17:20 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.202 11:17:20 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.202 11:17:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.202 --rc genhtml_branch_coverage=1 00:05:38.202 --rc genhtml_function_coverage=1 00:05:38.202 --rc genhtml_legend=1 00:05:38.202 --rc geninfo_all_blocks=1 00:05:38.202 --rc geninfo_unexecuted_blocks=1 00:05:38.202 00:05:38.202 ' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.202 --rc genhtml_branch_coverage=1 00:05:38.202 --rc genhtml_function_coverage=1 00:05:38.202 --rc genhtml_legend=1 00:05:38.202 --rc geninfo_all_blocks=1 00:05:38.202 --rc geninfo_unexecuted_blocks=1 00:05:38.202 00:05:38.202 ' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.202 --rc genhtml_branch_coverage=1 00:05:38.202 --rc genhtml_function_coverage=1 00:05:38.202 --rc genhtml_legend=1 00:05:38.202 --rc geninfo_all_blocks=1 00:05:38.202 --rc geninfo_unexecuted_blocks=1 00:05:38.202 00:05:38.202 ' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.202 --rc genhtml_branch_coverage=1 00:05:38.202 --rc genhtml_function_coverage=1 00:05:38.202 --rc genhtml_legend=1 00:05:38.202 --rc geninfo_all_blocks=1 00:05:38.202 --rc geninfo_unexecuted_blocks=1 00:05:38.202 00:05:38.202 ' 00:05:38.202 11:17:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.202 11:17:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58789 00:05:38.202 11:17:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.202 11:17:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58789 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58789 ']' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
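The scripts/common.sh walk traced again above is "lt 1.15 2": the installed lcov version (1.15) is split on ".-:" into fields, each field is checked against ^[0-9]+$ and compared numerically, and the verdict selects the pre-2.0 spelling of the coverage flags exported as LCOV_OPTS and LCOV. A condensed sketch of the same comparison, assuming purely numeric fields:

    # Condensed cmp_versions: returns 0 when $1 sorts strictly below $2.
    version_lt() {
        local IFS=.- v
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov < 2: use legacy --rc names'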
00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.202 11:17:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.462 [2024-11-15 11:17:21.210766] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:38.462 [2024-11-15 11:17:21.210954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58789 ] 00:05:38.462 [2024-11-15 11:17:21.396622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.720 [2024-11-15 11:17:21.518219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.656 11:17:22 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:39.656 11:17:22 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:39.656 11:17:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.915 11:17:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58789 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58789 ']' 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58789 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58789 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:39.915 killing process with pid 58789 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58789' 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@971 -- # kill 58789 00:05:39.915 11:17:22 alias_rpc -- common/autotest_common.sh@976 -- # wait 58789 00:05:42.449 00:05:42.449 real 0m3.932s 00:05:42.449 user 0m3.965s 00:05:42.449 sys 0m0.642s 00:05:42.449 11:17:24 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.449 11:17:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.449 ************************************ 00:05:42.449 END TEST alias_rpc 00:05:42.449 ************************************ 00:05:42.449 11:17:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:42.449 11:17:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.449 11:17:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:42.449 11:17:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.449 11:17:24 -- common/autotest_common.sh@10 -- # set +x 00:05:42.449 ************************************ 00:05:42.449 START TEST spdkcli_tcp 00:05:42.449 ************************************ 00:05:42.449 11:17:24 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.449 * Looking for test storage... 
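killprocess, traced above for pid 58789, layers a few guards before signalling: kill -0 to confirm the pid is still alive, a uname check to pick the ps invocation, ps --no-headers -o comm= to verify the target is the reactor process and never sudo, then kill followed by wait so the exit status (and coverage output) is collected. A sketch of the same flow, assuming the target is a child of the current shell so wait can reap it:

    # killprocess-style teardown as traced above.
    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 1          # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                  # refuse to signal sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap our own child
    }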
00:05:42.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:42.449 11:17:24 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.449 11:17:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:42.449 11:17:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.449 11:17:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.449 --rc genhtml_branch_coverage=1 00:05:42.449 --rc genhtml_function_coverage=1 00:05:42.449 --rc genhtml_legend=1 00:05:42.449 --rc geninfo_all_blocks=1 00:05:42.449 --rc geninfo_unexecuted_blocks=1 00:05:42.449 00:05:42.449 ' 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.449 --rc genhtml_branch_coverage=1 00:05:42.449 --rc genhtml_function_coverage=1 00:05:42.449 --rc genhtml_legend=1 00:05:42.449 --rc geninfo_all_blocks=1 00:05:42.449 --rc geninfo_unexecuted_blocks=1 00:05:42.449 
00:05:42.449 ' 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.449 --rc genhtml_branch_coverage=1 00:05:42.449 --rc genhtml_function_coverage=1 00:05:42.449 --rc genhtml_legend=1 00:05:42.449 --rc geninfo_all_blocks=1 00:05:42.449 --rc geninfo_unexecuted_blocks=1 00:05:42.449 00:05:42.449 ' 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.449 --rc genhtml_branch_coverage=1 00:05:42.449 --rc genhtml_function_coverage=1 00:05:42.449 --rc genhtml_legend=1 00:05:42.449 --rc geninfo_all_blocks=1 00:05:42.449 --rc geninfo_unexecuted_blocks=1 00:05:42.449 00:05:42.449 ' 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58891 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58891 00:05:42.449 11:17:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58891 ']' 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:42.449 11:17:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.449 [2024-11-15 11:17:25.212921] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
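The spdk_tgt launched just above for this test uses -m 0x3 -p 0: bit i of the core mask selects CPU core i, so 0x3 requests reactors on cores 0 and 1, and -p 0 makes core 0 the main core, which matches the two "Reactor started" notices that follow. A tiny illustration of building such a mask:

    # Bit i of -m selects core i; cores 0 and 1 give 0b11 = 0x3.
    mask=0
    for core in 0 1; do mask=$(( mask | (1 << core) )); done
    printf -- '-m 0x%x -p 0\n' "$mask"    # -> -m 0x3 -p 0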
00:05:42.449 [2024-11-15 11:17:25.213128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:05:42.449 [2024-11-15 11:17:25.394615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.707 [2024-11-15 11:17:25.520912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.707 [2024-11-15 11:17:25.520918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.641 11:17:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:43.641 11:17:26 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:43.641 11:17:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.641 11:17:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58913 00:05:43.641 11:17:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.901 [ 00:05:43.901 "bdev_malloc_delete", 00:05:43.901 "bdev_malloc_create", 00:05:43.901 "bdev_null_resize", 00:05:43.901 "bdev_null_delete", 00:05:43.901 "bdev_null_create", 00:05:43.901 "bdev_nvme_cuse_unregister", 00:05:43.901 "bdev_nvme_cuse_register", 00:05:43.901 "bdev_opal_new_user", 00:05:43.901 "bdev_opal_set_lock_state", 00:05:43.901 "bdev_opal_delete", 00:05:43.901 "bdev_opal_get_info", 00:05:43.901 "bdev_opal_create", 00:05:43.901 "bdev_nvme_opal_revert", 00:05:43.901 "bdev_nvme_opal_init", 00:05:43.901 "bdev_nvme_send_cmd", 00:05:43.901 "bdev_nvme_set_keys", 00:05:43.901 "bdev_nvme_get_path_iostat", 00:05:43.901 "bdev_nvme_get_mdns_discovery_info", 00:05:43.901 "bdev_nvme_stop_mdns_discovery", 00:05:43.901 "bdev_nvme_start_mdns_discovery", 00:05:43.901 "bdev_nvme_set_multipath_policy", 00:05:43.901 "bdev_nvme_set_preferred_path", 00:05:43.901 "bdev_nvme_get_io_paths", 00:05:43.901 "bdev_nvme_remove_error_injection", 00:05:43.901 "bdev_nvme_add_error_injection", 00:05:43.901 "bdev_nvme_get_discovery_info", 00:05:43.901 "bdev_nvme_stop_discovery", 00:05:43.901 "bdev_nvme_start_discovery", 00:05:43.901 "bdev_nvme_get_controller_health_info", 00:05:43.901 "bdev_nvme_disable_controller", 00:05:43.901 "bdev_nvme_enable_controller", 00:05:43.901 "bdev_nvme_reset_controller", 00:05:43.901 "bdev_nvme_get_transport_statistics", 00:05:43.902 "bdev_nvme_apply_firmware", 00:05:43.902 "bdev_nvme_detach_controller", 00:05:43.902 "bdev_nvme_get_controllers", 00:05:43.902 "bdev_nvme_attach_controller", 00:05:43.902 "bdev_nvme_set_hotplug", 00:05:43.902 "bdev_nvme_set_options", 00:05:43.902 "bdev_passthru_delete", 00:05:43.902 "bdev_passthru_create", 00:05:43.902 "bdev_lvol_set_parent_bdev", 00:05:43.902 "bdev_lvol_set_parent", 00:05:43.902 "bdev_lvol_check_shallow_copy", 00:05:43.902 "bdev_lvol_start_shallow_copy", 00:05:43.902 "bdev_lvol_grow_lvstore", 00:05:43.902 "bdev_lvol_get_lvols", 00:05:43.902 "bdev_lvol_get_lvstores", 00:05:43.902 "bdev_lvol_delete", 00:05:43.902 "bdev_lvol_set_read_only", 00:05:43.902 "bdev_lvol_resize", 00:05:43.902 "bdev_lvol_decouple_parent", 00:05:43.902 "bdev_lvol_inflate", 00:05:43.902 "bdev_lvol_rename", 00:05:43.902 "bdev_lvol_clone_bdev", 00:05:43.902 "bdev_lvol_clone", 00:05:43.902 "bdev_lvol_snapshot", 00:05:43.902 "bdev_lvol_create", 00:05:43.902 "bdev_lvol_delete_lvstore", 00:05:43.902 "bdev_lvol_rename_lvstore", 00:05:43.902 
"bdev_lvol_create_lvstore", 00:05:43.902 "bdev_raid_set_options", 00:05:43.902 "bdev_raid_remove_base_bdev", 00:05:43.902 "bdev_raid_add_base_bdev", 00:05:43.902 "bdev_raid_delete", 00:05:43.902 "bdev_raid_create", 00:05:43.902 "bdev_raid_get_bdevs", 00:05:43.902 "bdev_error_inject_error", 00:05:43.902 "bdev_error_delete", 00:05:43.902 "bdev_error_create", 00:05:43.902 "bdev_split_delete", 00:05:43.902 "bdev_split_create", 00:05:43.902 "bdev_delay_delete", 00:05:43.902 "bdev_delay_create", 00:05:43.902 "bdev_delay_update_latency", 00:05:43.902 "bdev_zone_block_delete", 00:05:43.902 "bdev_zone_block_create", 00:05:43.902 "blobfs_create", 00:05:43.902 "blobfs_detect", 00:05:43.902 "blobfs_set_cache_size", 00:05:43.902 "bdev_xnvme_delete", 00:05:43.902 "bdev_xnvme_create", 00:05:43.902 "bdev_aio_delete", 00:05:43.902 "bdev_aio_rescan", 00:05:43.902 "bdev_aio_create", 00:05:43.902 "bdev_ftl_set_property", 00:05:43.902 "bdev_ftl_get_properties", 00:05:43.902 "bdev_ftl_get_stats", 00:05:43.902 "bdev_ftl_unmap", 00:05:43.902 "bdev_ftl_unload", 00:05:43.902 "bdev_ftl_delete", 00:05:43.902 "bdev_ftl_load", 00:05:43.902 "bdev_ftl_create", 00:05:43.902 "bdev_virtio_attach_controller", 00:05:43.902 "bdev_virtio_scsi_get_devices", 00:05:43.902 "bdev_virtio_detach_controller", 00:05:43.902 "bdev_virtio_blk_set_hotplug", 00:05:43.902 "bdev_iscsi_delete", 00:05:43.902 "bdev_iscsi_create", 00:05:43.902 "bdev_iscsi_set_options", 00:05:43.902 "accel_error_inject_error", 00:05:43.902 "ioat_scan_accel_module", 00:05:43.902 "dsa_scan_accel_module", 00:05:43.902 "iaa_scan_accel_module", 00:05:43.902 "keyring_file_remove_key", 00:05:43.902 "keyring_file_add_key", 00:05:43.902 "keyring_linux_set_options", 00:05:43.902 "fsdev_aio_delete", 00:05:43.902 "fsdev_aio_create", 00:05:43.902 "iscsi_get_histogram", 00:05:43.902 "iscsi_enable_histogram", 00:05:43.902 "iscsi_set_options", 00:05:43.902 "iscsi_get_auth_groups", 00:05:43.902 "iscsi_auth_group_remove_secret", 00:05:43.902 "iscsi_auth_group_add_secret", 00:05:43.902 "iscsi_delete_auth_group", 00:05:43.902 "iscsi_create_auth_group", 00:05:43.902 "iscsi_set_discovery_auth", 00:05:43.902 "iscsi_get_options", 00:05:43.902 "iscsi_target_node_request_logout", 00:05:43.902 "iscsi_target_node_set_redirect", 00:05:43.902 "iscsi_target_node_set_auth", 00:05:43.902 "iscsi_target_node_add_lun", 00:05:43.902 "iscsi_get_stats", 00:05:43.902 "iscsi_get_connections", 00:05:43.902 "iscsi_portal_group_set_auth", 00:05:43.902 "iscsi_start_portal_group", 00:05:43.902 "iscsi_delete_portal_group", 00:05:43.902 "iscsi_create_portal_group", 00:05:43.902 "iscsi_get_portal_groups", 00:05:43.902 "iscsi_delete_target_node", 00:05:43.902 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.902 "iscsi_target_node_add_pg_ig_maps", 00:05:43.902 "iscsi_create_target_node", 00:05:43.902 "iscsi_get_target_nodes", 00:05:43.902 "iscsi_delete_initiator_group", 00:05:43.902 "iscsi_initiator_group_remove_initiators", 00:05:43.902 "iscsi_initiator_group_add_initiators", 00:05:43.902 "iscsi_create_initiator_group", 00:05:43.902 "iscsi_get_initiator_groups", 00:05:43.902 "nvmf_set_crdt", 00:05:43.902 "nvmf_set_config", 00:05:43.902 "nvmf_set_max_subsystems", 00:05:43.902 "nvmf_stop_mdns_prr", 00:05:43.903 "nvmf_publish_mdns_prr", 00:05:43.903 "nvmf_subsystem_get_listeners", 00:05:43.903 "nvmf_subsystem_get_qpairs", 00:05:43.903 "nvmf_subsystem_get_controllers", 00:05:43.903 "nvmf_get_stats", 00:05:43.903 "nvmf_get_transports", 00:05:43.903 "nvmf_create_transport", 00:05:43.903 "nvmf_get_targets", 00:05:43.903 
"nvmf_delete_target", 00:05:43.903 "nvmf_create_target", 00:05:43.903 "nvmf_subsystem_allow_any_host", 00:05:43.903 "nvmf_subsystem_set_keys", 00:05:43.903 "nvmf_subsystem_remove_host", 00:05:43.903 "nvmf_subsystem_add_host", 00:05:43.903 "nvmf_ns_remove_host", 00:05:43.903 "nvmf_ns_add_host", 00:05:43.903 "nvmf_subsystem_remove_ns", 00:05:43.903 "nvmf_subsystem_set_ns_ana_group", 00:05:43.903 "nvmf_subsystem_add_ns", 00:05:43.903 "nvmf_subsystem_listener_set_ana_state", 00:05:43.903 "nvmf_discovery_get_referrals", 00:05:43.903 "nvmf_discovery_remove_referral", 00:05:43.903 "nvmf_discovery_add_referral", 00:05:43.903 "nvmf_subsystem_remove_listener", 00:05:43.903 "nvmf_subsystem_add_listener", 00:05:43.903 "nvmf_delete_subsystem", 00:05:43.903 "nvmf_create_subsystem", 00:05:43.903 "nvmf_get_subsystems", 00:05:43.903 "env_dpdk_get_mem_stats", 00:05:43.903 "nbd_get_disks", 00:05:43.903 "nbd_stop_disk", 00:05:43.903 "nbd_start_disk", 00:05:43.903 "ublk_recover_disk", 00:05:43.903 "ublk_get_disks", 00:05:43.903 "ublk_stop_disk", 00:05:43.903 "ublk_start_disk", 00:05:43.903 "ublk_destroy_target", 00:05:43.903 "ublk_create_target", 00:05:43.903 "virtio_blk_create_transport", 00:05:43.903 "virtio_blk_get_transports", 00:05:43.903 "vhost_controller_set_coalescing", 00:05:43.903 "vhost_get_controllers", 00:05:43.903 "vhost_delete_controller", 00:05:43.903 "vhost_create_blk_controller", 00:05:43.903 "vhost_scsi_controller_remove_target", 00:05:43.903 "vhost_scsi_controller_add_target", 00:05:43.903 "vhost_start_scsi_controller", 00:05:43.903 "vhost_create_scsi_controller", 00:05:43.903 "thread_set_cpumask", 00:05:43.903 "scheduler_set_options", 00:05:43.903 "framework_get_governor", 00:05:43.903 "framework_get_scheduler", 00:05:43.903 "framework_set_scheduler", 00:05:43.903 "framework_get_reactors", 00:05:43.903 "thread_get_io_channels", 00:05:43.903 "thread_get_pollers", 00:05:43.903 "thread_get_stats", 00:05:43.903 "framework_monitor_context_switch", 00:05:43.903 "spdk_kill_instance", 00:05:43.903 "log_enable_timestamps", 00:05:43.903 "log_get_flags", 00:05:43.903 "log_clear_flag", 00:05:43.903 "log_set_flag", 00:05:43.903 "log_get_level", 00:05:43.903 "log_set_level", 00:05:43.903 "log_get_print_level", 00:05:43.903 "log_set_print_level", 00:05:43.903 "framework_enable_cpumask_locks", 00:05:43.903 "framework_disable_cpumask_locks", 00:05:43.903 "framework_wait_init", 00:05:43.903 "framework_start_init", 00:05:43.903 "scsi_get_devices", 00:05:43.903 "bdev_get_histogram", 00:05:43.903 "bdev_enable_histogram", 00:05:43.903 "bdev_set_qos_limit", 00:05:43.903 "bdev_set_qd_sampling_period", 00:05:43.903 "bdev_get_bdevs", 00:05:43.903 "bdev_reset_iostat", 00:05:43.903 "bdev_get_iostat", 00:05:43.903 "bdev_examine", 00:05:43.903 "bdev_wait_for_examine", 00:05:43.903 "bdev_set_options", 00:05:43.903 "accel_get_stats", 00:05:43.903 "accel_set_options", 00:05:43.903 "accel_set_driver", 00:05:43.903 "accel_crypto_key_destroy", 00:05:43.903 "accel_crypto_keys_get", 00:05:43.903 "accel_crypto_key_create", 00:05:43.903 "accel_assign_opc", 00:05:43.903 "accel_get_module_info", 00:05:43.903 "accel_get_opc_assignments", 00:05:43.903 "vmd_rescan", 00:05:43.903 "vmd_remove_device", 00:05:43.903 "vmd_enable", 00:05:43.903 "sock_get_default_impl", 00:05:43.903 "sock_set_default_impl", 00:05:43.903 "sock_impl_set_options", 00:05:43.903 "sock_impl_get_options", 00:05:43.903 "iobuf_get_stats", 00:05:43.903 "iobuf_set_options", 00:05:43.903 "keyring_get_keys", 00:05:43.903 "framework_get_pci_devices", 00:05:43.903 
"framework_get_config", 00:05:43.903 "framework_get_subsystems", 00:05:43.903 "fsdev_set_opts", 00:05:43.903 "fsdev_get_opts", 00:05:43.903 "trace_get_info", 00:05:43.903 "trace_get_tpoint_group_mask", 00:05:43.903 "trace_disable_tpoint_group", 00:05:43.904 "trace_enable_tpoint_group", 00:05:43.904 "trace_clear_tpoint_mask", 00:05:43.904 "trace_set_tpoint_mask", 00:05:43.904 "notify_get_notifications", 00:05:43.904 "notify_get_types", 00:05:43.904 "spdk_get_version", 00:05:43.904 "rpc_get_methods" 00:05:43.904 ] 00:05:43.904 11:17:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.904 11:17:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.904 11:17:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58891 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58891 ']' 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58891 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58891 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:43.904 killing process with pid 58891 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58891' 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58891 00:05:43.904 11:17:26 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58891 00:05:46.439 00:05:46.439 real 0m3.955s 00:05:46.439 user 0m6.993s 00:05:46.439 sys 0m0.718s 00:05:46.439 11:17:28 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.439 ************************************ 00:05:46.439 END TEST spdkcli_tcp 00:05:46.439 ************************************ 00:05:46.439 11:17:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:46.439 11:17:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.439 11:17:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:46.439 11:17:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.439 11:17:28 -- common/autotest_common.sh@10 -- # set +x 00:05:46.439 ************************************ 00:05:46.439 START TEST dpdk_mem_utility 00:05:46.439 ************************************ 00:05:46.439 11:17:28 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.439 * Looking for test storage... 
00:05:46.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:46.439 11:17:28 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.439 11:17:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.439 11:17:28 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.439 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:46.439 11:17:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.440 11:17:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.440 --rc genhtml_branch_coverage=1 00:05:46.440 --rc genhtml_function_coverage=1 00:05:46.440 --rc genhtml_legend=1 00:05:46.440 --rc geninfo_all_blocks=1 00:05:46.440 --rc geninfo_unexecuted_blocks=1 00:05:46.440 00:05:46.440 ' 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.440 --rc 
genhtml_branch_coverage=1 00:05:46.440 --rc genhtml_function_coverage=1 00:05:46.440 --rc genhtml_legend=1 00:05:46.440 --rc geninfo_all_blocks=1 00:05:46.440 --rc geninfo_unexecuted_blocks=1 00:05:46.440 00:05:46.440 ' 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.440 --rc genhtml_branch_coverage=1 00:05:46.440 --rc genhtml_function_coverage=1 00:05:46.440 --rc genhtml_legend=1 00:05:46.440 --rc geninfo_all_blocks=1 00:05:46.440 --rc geninfo_unexecuted_blocks=1 00:05:46.440 00:05:46.440 ' 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.440 --rc genhtml_branch_coverage=1 00:05:46.440 --rc genhtml_function_coverage=1 00:05:46.440 --rc genhtml_legend=1 00:05:46.440 --rc geninfo_all_blocks=1 00:05:46.440 --rc geninfo_unexecuted_blocks=1 00:05:46.440 00:05:46.440 ' 00:05:46.440 11:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.440 11:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59007 00:05:46.440 11:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59007 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59007 ']' 00:05:46.440 11:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:46.440 11:17:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.440 [2024-11-15 11:17:29.202727] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:46.440 [2024-11-15 11:17:29.202935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:05:46.698 [2024-11-15 11:17:29.387414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.698 [2024-11-15 11:17:29.505693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.636 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.636 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:47.636 11:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.636 11:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.636 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:47.636 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.636 { 00:05:47.636 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.636 } 00:05:47.636 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:47.636 11:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.636 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:47.636 1 heaps totaling size 824.000000 MiB 00:05:47.636 size: 824.000000 MiB heap id: 0 00:05:47.636 end heaps---------- 00:05:47.636 9 mempools totaling size 603.782043 MiB 00:05:47.636 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.636 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.636 size: 100.555481 MiB name: bdev_io_59007 00:05:47.636 size: 50.003479 MiB name: msgpool_59007 00:05:47.636 size: 36.509338 MiB name: fsdev_io_59007 00:05:47.636 size: 21.763794 MiB name: PDU_Pool 00:05:47.636 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.636 size: 4.133484 MiB name: evtpool_59007 00:05:47.636 size: 0.026123 MiB name: Session_Pool 00:05:47.636 end mempools------- 00:05:47.636 6 memzones totaling size 4.142822 MiB 00:05:47.636 size: 1.000366 MiB name: RG_ring_0_59007 00:05:47.636 size: 1.000366 MiB name: RG_ring_1_59007 00:05:47.636 size: 1.000366 MiB name: RG_ring_4_59007 00:05:47.636 size: 1.000366 MiB name: RG_ring_5_59007 00:05:47.636 size: 0.125366 MiB name: RG_ring_2_59007 00:05:47.636 size: 0.015991 MiB name: RG_ring_3_59007 00:05:47.636 end memzones------- 00:05:47.636 11:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.636 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:47.636 list of free elements. 
size: 16.781860 MiB 00:05:47.636 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:47.636 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:47.636 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:47.636 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:47.636 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:47.636 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:47.636 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:47.636 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:47.636 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:47.636 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:47.636 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:47.636 element at address: 0x20001b400000 with size: 0.563416 MiB 00:05:47.636 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:47.636 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:47.636 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:47.636 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:47.636 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:47.636 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:47.636 list of standard malloc elements. size: 199.287231 MiB 00:05:47.636 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:47.636 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:47.636 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:47.636 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:47.636 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:47.636 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:47.636 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:47.636 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:47.636 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:47.636 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:47.636 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:47.636 [several hundred further per-element entries, every one reporting size: 0.000244 MiB, spanning the 0x2000002…/0x2000004…, 0x20000087…, 0x200000c7…, 0x20000a5f…, 0x200012b…/0x200012c…, 0x2000192…/0x2000196…/0x200019…, 0x20001b49… and 0x2000288… address ranges; elided] 00:05:47.638 list of memzone associated elements.
size: 607.930908 MiB 00:05:47.638 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:47.638 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.638 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:47.638 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.638 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:47.638 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59007_0 00:05:47.638 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:47.638 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59007_0 00:05:47.638 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:47.638 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59007_0 00:05:47.638 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:47.638 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.638 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:47.638 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.638 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:47.638 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59007_0 00:05:47.639 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:47.639 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59007 00:05:47.639 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:47.639 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59007 00:05:47.639 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:47.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.639 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:47.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.639 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:47.639 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.639 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:47.639 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.639 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:47.639 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59007 00:05:47.639 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:47.639 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59007 00:05:47.639 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:47.639 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59007 00:05:47.639 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:47.639 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59007 00:05:47.639 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:47.639 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59007 00:05:47.639 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:47.639 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59007 00:05:47.639 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:47.639 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.639 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:47.639 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.639 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:47.639 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.639 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:47.639 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59007 00:05:47.639 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:47.639 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59007 00:05:47.639 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:47.639 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.639 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:47.639 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.639 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:47.639 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59007 00:05:47.639 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:47.639 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.639 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:47.639 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59007 00:05:47.639 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:47.639 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59007 00:05:47.639 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:47.639 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59007 00:05:47.639 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:47.639 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.639 11:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.639 11:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59007 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59007 ']' 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59007 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59007 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:47.639 killing process with pid 59007 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59007' 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59007 00:05:47.639 11:17:30 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59007 00:05:50.175 00:05:50.175 real 0m3.765s 00:05:50.175 user 0m3.728s 00:05:50.175 sys 0m0.634s 00:05:50.175 ************************************ 00:05:50.175 END TEST dpdk_mem_utility 00:05:50.175 ************************************ 00:05:50.175 11:17:32 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.175 11:17:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.175 11:17:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:50.175 11:17:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.175 11:17:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.175 11:17:32 -- common/autotest_common.sh@10 -- # set +x 
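For readers tracing the teardown above: the xtrace lines show killprocess confirming pid 59007 is alive with kill -0, resolving the process name via ps --no-headers -o comm= (reactor_0 here), then issuing kill and wait. A minimal sketch of that flow, reconstructed only from the commands visible in this trace (the real helper in common/autotest_common.sh also branches on whether the process name is sudo, per the '[' reactor_0 = sudo ']' check above, which is omitted here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                   # no pid recorded, nothing to kill
        kill -0 "$pid" || return 0                  # process already gone
        if [ "$(uname)" = Linux ]; then
            # resolve the command name, e.g. reactor_0 for an SPDK app
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it and propagate the exit status
    }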
00:05:50.175 ************************************ 00:05:50.175 START TEST event 00:05:50.175 ************************************ 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:50.175 * Looking for test storage... 00:05:50.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:50.175 11:17:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.175 11:17:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.175 11:17:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.175 11:17:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.175 11:17:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.175 11:17:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.175 11:17:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.175 11:17:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.175 11:17:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.175 11:17:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.175 11:17:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.175 11:17:32 event -- scripts/common.sh@344 -- # case "$op" in 00:05:50.175 11:17:32 event -- scripts/common.sh@345 -- # : 1 00:05:50.175 11:17:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.175 11:17:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.175 11:17:32 event -- scripts/common.sh@365 -- # decimal 1 00:05:50.175 11:17:32 event -- scripts/common.sh@353 -- # local d=1 00:05:50.175 11:17:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.175 11:17:32 event -- scripts/common.sh@355 -- # echo 1 00:05:50.175 11:17:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.175 11:17:32 event -- scripts/common.sh@366 -- # decimal 2 00:05:50.175 11:17:32 event -- scripts/common.sh@353 -- # local d=2 00:05:50.175 11:17:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.175 11:17:32 event -- scripts/common.sh@355 -- # echo 2 00:05:50.175 11:17:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.175 11:17:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.175 11:17:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.175 11:17:32 event -- scripts/common.sh@368 -- # return 0 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:50.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.175 --rc genhtml_branch_coverage=1 00:05:50.175 --rc genhtml_function_coverage=1 00:05:50.175 --rc genhtml_legend=1 00:05:50.175 --rc geninfo_all_blocks=1 00:05:50.175 --rc geninfo_unexecuted_blocks=1 00:05:50.175 00:05:50.175 ' 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:50.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.175 --rc genhtml_branch_coverage=1 00:05:50.175 --rc genhtml_function_coverage=1 00:05:50.175 --rc genhtml_legend=1 00:05:50.175 --rc 
geninfo_all_blocks=1 00:05:50.175 --rc geninfo_unexecuted_blocks=1 00:05:50.175 00:05:50.175 ' 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:50.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.175 --rc genhtml_branch_coverage=1 00:05:50.175 --rc genhtml_function_coverage=1 00:05:50.175 --rc genhtml_legend=1 00:05:50.175 --rc geninfo_all_blocks=1 00:05:50.175 --rc geninfo_unexecuted_blocks=1 00:05:50.175 00:05:50.175 ' 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:50.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.175 --rc genhtml_branch_coverage=1 00:05:50.175 --rc genhtml_function_coverage=1 00:05:50.175 --rc genhtml_legend=1 00:05:50.175 --rc geninfo_all_blocks=1 00:05:50.175 --rc geninfo_unexecuted_blocks=1 00:05:50.175 00:05:50.175 ' 00:05:50.175 11:17:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:50.175 11:17:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:50.175 11:17:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:50.175 11:17:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.175 11:17:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.175 ************************************ 00:05:50.175 START TEST event_perf 00:05:50.175 ************************************ 00:05:50.175 11:17:32 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:50.175 Running I/O for 1 seconds...[2024-11-15 11:17:32.925685] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:50.175 [2024-11-15 11:17:32.925830] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ] 00:05:50.175 [2024-11-15 11:17:33.100215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.434 [2024-11-15 11:17:33.228368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.434 [2024-11-15 11:17:33.228534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.434 Running I/O for 1 seconds...[2024-11-15 11:17:33.229152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.434 [2024-11-15 11:17:33.229162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.812 00:05:51.812 lcore 0: 198829 00:05:51.812 lcore 1: 198827 00:05:51.812 lcore 2: 198829 00:05:51.812 lcore 3: 198829 00:05:51.812 done. 
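A quick sanity check on the per-lcore counters just printed (one illustrative shell line, not part of the harness): event_perf ran for 1 second on core mask 0xF, i.e. lcores 0 through 3, so summing the four counters gives the aggregate event rate for the run.

    $ echo $((198829 + 198827 + 198829 + 198829))   # total events across lcores 0-3 in the 1 s run
    795314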
00:05:51.812 00:05:51.812 real 0m1.572s 00:05:51.812 user 0m4.347s 00:05:51.812 sys 0m0.105s 00:05:51.812 11:17:34 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.812 ************************************ 00:05:51.812 END TEST event_perf 00:05:51.812 ************************************ 00:05:51.812 11:17:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 11:17:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.812 11:17:34 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:51.812 11:17:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.812 11:17:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.812 ************************************ 00:05:51.812 START TEST event_reactor 00:05:51.812 ************************************ 00:05:51.812 11:17:34 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:51.812 [2024-11-15 11:17:34.572007] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:51.812 [2024-11-15 11:17:34.572302] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:05:51.812 [2024-11-15 11:17:34.758829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.070 [2024-11-15 11:17:34.883416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.453 test_start 00:05:53.453 oneshot 00:05:53.453 tick 100 00:05:53.453 tick 100 00:05:53.453 tick 250 00:05:53.453 tick 100 00:05:53.453 tick 100 00:05:53.453 tick 100 00:05:53.453 tick 250 00:05:53.453 tick 500 00:05:53.453 tick 100 00:05:53.453 tick 100 00:05:53.453 tick 250 00:05:53.453 tick 100 00:05:53.453 tick 100 00:05:53.453 test_end 00:05:53.453 00:05:53.453 real 0m1.587s 00:05:53.453 user 0m1.371s 00:05:53.453 sys 0m0.107s 00:05:53.453 ************************************ 00:05:53.453 END TEST event_reactor 00:05:53.453 ************************************ 00:05:53.453 11:17:36 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:53.453 11:17:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:53.453 11:17:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.453 11:17:36 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:53.453 11:17:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:53.453 11:17:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.453 ************************************ 00:05:53.453 START TEST event_reactor_perf 00:05:53.453 ************************************ 00:05:53.453 11:17:36 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.453 [2024-11-15 11:17:36.199836] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:05:53.453 [2024-11-15 11:17:36.200020] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59191 ] 00:05:53.453 [2024-11-15 11:17:36.381524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.711 [2024-11-15 11:17:36.509291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.089 test_start 00:05:55.089 test_end 00:05:55.089 Performance: 298507 events per second 00:05:55.089 00:05:55.089 real 0m1.587s 00:05:55.089 user 0m1.370s 00:05:55.089 sys 0m0.108s 00:05:55.089 11:17:37 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.089 ************************************ 00:05:55.089 END TEST event_reactor_perf 00:05:55.089 ************************************ 00:05:55.089 11:17:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.089 11:17:37 event -- event/event.sh@49 -- # uname -s 00:05:55.089 11:17:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.089 11:17:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.089 11:17:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.089 11:17:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.089 11:17:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.089 ************************************ 00:05:55.089 START TEST event_scheduler 00:05:55.089 ************************************ 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.089 * Looking for test storage... 
00:05:55.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.089 11:17:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.089 --rc genhtml_branch_coverage=1 00:05:55.089 --rc genhtml_function_coverage=1 00:05:55.089 --rc genhtml_legend=1 00:05:55.089 --rc geninfo_all_blocks=1 00:05:55.089 --rc geninfo_unexecuted_blocks=1 00:05:55.089 00:05:55.089 ' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.089 --rc genhtml_branch_coverage=1 00:05:55.089 --rc genhtml_function_coverage=1 00:05:55.089 --rc genhtml_legend=1 00:05:55.089 --rc geninfo_all_blocks=1 00:05:55.089 --rc geninfo_unexecuted_blocks=1 00:05:55.089 00:05:55.089 ' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.089 --rc genhtml_branch_coverage=1 00:05:55.089 --rc genhtml_function_coverage=1 00:05:55.089 --rc genhtml_legend=1 00:05:55.089 --rc geninfo_all_blocks=1 00:05:55.089 --rc geninfo_unexecuted_blocks=1 00:05:55.089 00:05:55.089 ' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.089 --rc genhtml_branch_coverage=1 00:05:55.089 --rc genhtml_function_coverage=1 00:05:55.089 --rc genhtml_legend=1 00:05:55.089 --rc geninfo_all_blocks=1 00:05:55.089 --rc geninfo_unexecuted_blocks=1 00:05:55.089 00:05:55.089 ' 00:05:55.089 11:17:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.089 11:17:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59267 00:05:55.089 11:17:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.089 11:17:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.089 11:17:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59267 00:05:55.089 11:17:37 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59267 ']' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.089 11:17:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.348 [2024-11-15 11:17:38.069024] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:05:55.348 [2024-11-15 11:17:38.069199] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ] 00:05:55.348 [2024-11-15 11:17:38.251972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.607 [2024-11-15 11:17:38.420717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.607 [2024-11-15 11:17:38.420849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.607 [2024-11-15 11:17:38.420955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.607 [2024-11-15 11:17:38.420962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.174 11:17:39 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.174 11:17:39 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:56.174 11:17:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.174 11:17:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.174 11:17:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.174 POWER: Cannot set governor of lcore 0 to performance 00:05:56.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.174 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:56.174 POWER: Cannot set governor of lcore 0 to userspace 00:05:56.174 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:56.174 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:56.174 POWER: Unable to set Power Management Environment for lcore 0 00:05:56.174 [2024-11-15 11:17:39.102964] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:56.175 [2024-11-15 11:17:39.102997] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:56.175 [2024-11-15 11:17:39.103012] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.175 [2024-11-15 11:17:39.103082] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.175 [2024-11-15 11:17:39.103104] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.175 [2024-11-15 11:17:39.103120] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.175 11:17:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.175 11:17:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.175 11:17:39 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.175 11:17:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.742 [2024-11-15 11:17:39.454710] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:56.742 11:17:39 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.742 11:17:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.742 11:17:39 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.742 11:17:39 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.742 11:17:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.742 ************************************ 00:05:56.742 START TEST scheduler_create_thread 00:05:56.742 ************************************ 00:05:56.742 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:56.742 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.742 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.742 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.742 2 00:05:56.742 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 3 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 4 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 5 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 6 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 7 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 8 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 9 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 10 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.743 11:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.145 11:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.145 11:17:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.145 11:17:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.145 11:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.145 11:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.522 11:17:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.522 00:05:59.522 real 0m2.619s 00:05:59.522 user 0m0.022s 00:05:59.522 sys 0m0.006s 00:05:59.522 ************************************ 00:05:59.522 END TEST scheduler_create_thread 00:05:59.522 11:17:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.522 11:17:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.522 ************************************ 00:05:59.522 11:17:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.522 11:17:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59267 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59267 ']' 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59267 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59267 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:59.522 killing process with pid 59267 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59267' 00:05:59.522 11:17:42 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59267 00:05:59.522 11:17:42 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59267 00:05:59.781 [2024-11-15 11:17:42.565531] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:00.790 00:06:00.790 real 0m5.884s 00:06:00.790 user 0m10.399s 00:06:00.790 sys 0m0.542s 00:06:00.790 11:17:43 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.790 11:17:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:00.790 ************************************ 00:06:00.790 END TEST event_scheduler 00:06:00.790 ************************************ 00:06:01.049 11:17:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:01.049 11:17:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:01.049 11:17:43 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.049 11:17:43 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.049 11:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.049 ************************************ 00:06:01.049 START TEST app_repeat 00:06:01.049 ************************************ 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59379 00:06:01.049 Process app_repeat pid: 59379 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59379' 00:06:01.049 spdk_app_start Round 0 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:01.049 11:17:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59379 /var/tmp/spdk-nbd.sock 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59379 ']' 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:01.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:01.049 11:17:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.049 [2024-11-15 11:17:43.797772] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
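Stepping back to the scheduler_create_thread trace above: stripped of the xtrace prefixes, the test drives the scheduler plugin through a short RPC sequence. A condensed replay of calls that appear verbatim in the trace (-n names the thread, -m gives a pinned core mask, -a sets the thread's claimed active percentage; thread IDs 11 and 12 are the values the create calls happened to return in this run):

    # a pinned thread on core 0 reporting 100% busy (repeated for masks 0x2/0x4/0x8)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # an unpinned thread that claims to be ~30% active
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # retune one created thread to 50% active, then delete another
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12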
00:06:01.049 [2024-11-15 11:17:43.797932] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59379 ] 00:06:01.049 [2024-11-15 11:17:43.974366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.308 [2024-11-15 11:17:44.114402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.308 [2024-11-15 11:17:44.114414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.244 11:17:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:02.244 11:17:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:02.244 11:17:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.244 Malloc0 00:06:02.502 11:17:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.762 Malloc1 00:06:02.762 11:17:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.762 11:17:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:03.021 /dev/nbd0 00:06:03.021 11:17:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:03.021 11:17:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:03.021 11:17:45 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.021 1+0 records in 00:06:03.021 1+0 records out 00:06:03.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032254 s, 12.7 MB/s 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.021 11:17:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.021 11:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.021 11:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.021 11:17:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:03.280 /dev/nbd1 00:06:03.540 11:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.540 11:17:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.540 1+0 records in 00:06:03.540 1+0 records out 00:06:03.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413715 s, 9.9 MB/s 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:03.540 11:17:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:03.540 11:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.540 11:17:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.540 11:17:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.540 11:17:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.540 
11:17:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.799 { 00:06:03.799 "nbd_device": "/dev/nbd0", 00:06:03.799 "bdev_name": "Malloc0" 00:06:03.799 }, 00:06:03.799 { 00:06:03.799 "nbd_device": "/dev/nbd1", 00:06:03.799 "bdev_name": "Malloc1" 00:06:03.799 } 00:06:03.799 ]' 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.799 { 00:06:03.799 "nbd_device": "/dev/nbd0", 00:06:03.799 "bdev_name": "Malloc0" 00:06:03.799 }, 00:06:03.799 { 00:06:03.799 "nbd_device": "/dev/nbd1", 00:06:03.799 "bdev_name": "Malloc1" 00:06:03.799 } 00:06:03.799 ]' 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.799 /dev/nbd1' 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.799 /dev/nbd1' 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.799 11:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.800 256+0 records in 00:06:03.800 256+0 records out 00:06:03.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104995 s, 99.9 MB/s 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.800 256+0 records in 00:06:03.800 256+0 records out 00:06:03.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272039 s, 38.5 MB/s 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.800 11:17:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:04.059 256+0 records in 00:06:04.059 256+0 records out 00:06:04.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347404 s, 30.2 MB/s 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.059 11:17:46 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.059 11:17:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.317 11:17:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.576 11:17:47 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.576 11:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.834 11:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.834 11:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.834 11:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.093 11:17:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.093 11:17:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.351 11:17:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.728 [2024-11-15 11:17:49.384236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.728 [2024-11-15 11:17:49.523677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.728 [2024-11-15 11:17:49.523692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.985 [2024-11-15 11:17:49.727440] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.985 [2024-11-15 11:17:49.727535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:08.365 spdk_app_start Round 1 00:06:08.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.365 11:17:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.365 11:17:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:08.365 11:17:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59379 /var/tmp/spdk-nbd.sock 00:06:08.365 11:17:51 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59379 ']' 00:06:08.365 11:17:51 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.365 11:17:51 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.365 11:17:51 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
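[editor's note] Round 0 above is one pass of nbd_rpc_data_verify: two 64 MiB malloc bdevs are created over RPC, exported as kernel nbd devices, written with a random 1 MiB pattern, cmp-verified on readback, and torn down before spdk_kill_instance ends the round. Condensed for a single device — the RPC names are verbatim from the trace, while the temp-file path is illustrative:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096           # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
  $rpc nbd_start_disk Malloc0 /dev/nbd0     # expose the bdev as a kernel block device

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB pattern
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through nbd
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0   # byte-for-byte readback check

  $rpc nbd_stop_disk /dev/nbd0
  $rpc spdk_kill_instance SIGTERM           # end the round; app_repeat relaunches the app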
00:06:08.365 11:17:51 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.365 11:17:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.931 11:17:51 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:08.931 11:17:51 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:08.931 11:17:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.189 Malloc0 00:06:09.189 11:17:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.446 Malloc1 00:06:09.446 11:17:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.446 11:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.011 /dev/nbd0 00:06:10.011 11:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.011 11:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.011 1+0 records in 00:06:10.011 1+0 records out 
00:06:10.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272734 s, 15.0 MB/s 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:10.011 11:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:10.011 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.011 11:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.011 11:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.269 /dev/nbd1 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.269 1+0 records in 00:06:10.269 1+0 records out 00:06:10.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377305 s, 10.9 MB/s 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:10.269 11:17:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.269 11:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.528 11:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.528 { 00:06:10.528 "nbd_device": "/dev/nbd0", 00:06:10.528 "bdev_name": "Malloc0" 00:06:10.528 }, 00:06:10.528 { 00:06:10.528 "nbd_device": "/dev/nbd1", 00:06:10.528 "bdev_name": "Malloc1" 00:06:10.528 } 
00:06:10.528 ]' 00:06:10.528 11:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.528 { 00:06:10.528 "nbd_device": "/dev/nbd0", 00:06:10.528 "bdev_name": "Malloc0" 00:06:10.528 }, 00:06:10.528 { 00:06:10.528 "nbd_device": "/dev/nbd1", 00:06:10.528 "bdev_name": "Malloc1" 00:06:10.528 } 00:06:10.528 ]' 00:06:10.528 11:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.786 /dev/nbd1' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.786 /dev/nbd1' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.786 256+0 records in 00:06:10.786 256+0 records out 00:06:10.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00950813 s, 110 MB/s 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.786 256+0 records in 00:06:10.786 256+0 records out 00:06:10.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262459 s, 40.0 MB/s 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.786 256+0 records in 00:06:10.786 256+0 records out 00:06:10.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0405951 s, 25.8 MB/s 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.786 11:17:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.786 11:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.045 11:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.610 11:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.867 11:17:54 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.867 11:17:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.867 11:17:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.433 11:17:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:13.368 [2024-11-15 11:17:56.165785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.368 [2024-11-15 11:17:56.297583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.368 [2024-11-15 11:17:56.297589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.626 [2024-11-15 11:17:56.504684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.626 [2024-11-15 11:17:56.504791] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.572 spdk_app_start Round 2 00:06:15.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.572 11:17:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.572 11:17:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.572 11:17:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59379 /var/tmp/spdk-nbd.sock 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59379 ']' 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
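[editor's note] The teardown check traced above asserts that nbd_get_disks returns an empty list once both devices are stopped; grep -c exits non-zero on zero matches, which is why a bare true appears in the xtrace. Roughly:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  if [ "$count" -ne 0 ]; then
      echo "nbd devices still exported after teardown" >&2
      exit 1
  fi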
00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.572 11:17:58 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:15.572 11:17:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.138 Malloc0 00:06:16.138 11:17:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.396 Malloc1 00:06:16.396 11:17:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.396 11:17:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.655 /dev/nbd0 00:06:16.655 11:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.655 11:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.655 1+0 records in 00:06:16.655 1+0 records out 
00:06:16.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327416 s, 12.5 MB/s 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:16.655 11:17:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.913 11:17:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:16.913 11:17:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:16.913 11:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.913 11:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.913 11:17:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.172 /dev/nbd1 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.172 1+0 records in 00:06:17.172 1+0 records out 00:06:17.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253786 s, 16.1 MB/s 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:17.172 11:17:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.172 11:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.431 11:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.431 { 00:06:17.431 "nbd_device": "/dev/nbd0", 00:06:17.431 "bdev_name": "Malloc0" 00:06:17.431 }, 00:06:17.431 { 00:06:17.431 "nbd_device": "/dev/nbd1", 00:06:17.431 "bdev_name": "Malloc1" 00:06:17.431 } 
00:06:17.431 ]' 00:06:17.431 11:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.431 { 00:06:17.431 "nbd_device": "/dev/nbd0", 00:06:17.431 "bdev_name": "Malloc0" 00:06:17.431 }, 00:06:17.431 { 00:06:17.431 "nbd_device": "/dev/nbd1", 00:06:17.431 "bdev_name": "Malloc1" 00:06:17.431 } 00:06:17.431 ]' 00:06:17.431 11:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.690 /dev/nbd1' 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.690 /dev/nbd1' 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.690 256+0 records in 00:06:17.690 256+0 records out 00:06:17.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0082222 s, 128 MB/s 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.690 256+0 records in 00:06:17.690 256+0 records out 00:06:17.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310054 s, 33.8 MB/s 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.690 256+0 records in 00:06:17.690 256+0 records out 00:06:17.690 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363352 s, 28.9 MB/s 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.690 11:18:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.691 11:18:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.949 11:18:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.206 11:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.770 11:18:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.770 11:18:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.335 11:18:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.270 [2024-11-15 11:18:03.178319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.528 [2024-11-15 11:18:03.313217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.528 [2024-11-15 11:18:03.313229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.787 [2024-11-15 11:18:03.511592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.787 [2024-11-15 11:18:03.511771] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:22.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.191 11:18:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59379 /var/tmp/spdk-nbd.sock 00:06:22.191 11:18:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59379 ']' 00:06:22.191 11:18:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.191 11:18:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.191 11:18:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
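[editor's note] That was the third and last scripted teardown; the app has just relaunched for Round 3, which is ended by killprocess rather than another spdk_kill_instance. The driver in event.sh paraphrases to the loop below (helper names as they appear in the xtrace; their bodies elided):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          spdk_kill_instance SIGTERM
      sleep 3    # let app_repeat tear the app down and bring up the next round
  done
  killprocess "$repeat_pid"   # terminates the Round 3 instance and ends the test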
00:06:22.191 11:18:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.191 11:18:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:22.450 11:18:05 event.app_repeat -- event/event.sh@39 -- # killprocess 59379 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59379 ']' 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59379 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59379 00:06:22.450 killing process with pid 59379 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59379' 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59379 00:06:22.450 11:18:05 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59379 00:06:23.387 spdk_app_start is called in Round 0. 00:06:23.387 Shutdown signal received, stop current app iteration 00:06:23.387 Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 reinitialization... 00:06:23.387 spdk_app_start is called in Round 1. 00:06:23.387 Shutdown signal received, stop current app iteration 00:06:23.387 Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 reinitialization... 00:06:23.387 spdk_app_start is called in Round 2. 00:06:23.387 Shutdown signal received, stop current app iteration 00:06:23.387 Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 reinitialization... 00:06:23.387 spdk_app_start is called in Round 3. 00:06:23.387 Shutdown signal received, stop current app iteration 00:06:23.645 11:18:06 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:23.645 ************************************ 00:06:23.645 END TEST app_repeat 00:06:23.645 ************************************ 00:06:23.645 11:18:06 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:23.645 00:06:23.645 real 0m22.607s 00:06:23.645 user 0m50.445s 00:06:23.645 sys 0m3.328s 00:06:23.645 11:18:06 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:23.645 11:18:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.645 11:18:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:23.645 11:18:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:23.645 11:18:06 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:23.646 11:18:06 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.646 11:18:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.646 ************************************ 00:06:23.646 START TEST cpu_locks 00:06:23.646 ************************************ 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:23.646 * Looking for test storage... 
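[editor's note] app_repeat exits once killprocess signals pid 59379 and reaps it; the same helper recurs for every daemon in this log. A reduced sketch of the checks visible in the trace — the sudo special case is collapsed to a bail-out here, whereas the real helper handles it:

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                   # still alive?
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
          [ "$name" = sudo ] && return 1           # real helper signals the child instead
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap it and surface its exit status
  }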
00:06:23.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.646 11:18:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:23.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.646 --rc genhtml_branch_coverage=1 00:06:23.646 --rc genhtml_function_coverage=1 00:06:23.646 --rc genhtml_legend=1 00:06:23.646 --rc geninfo_all_blocks=1 00:06:23.646 --rc geninfo_unexecuted_blocks=1 00:06:23.646 00:06:23.646 ' 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:23.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.646 --rc genhtml_branch_coverage=1 00:06:23.646 --rc genhtml_function_coverage=1 
00:06:23.646 --rc genhtml_legend=1 00:06:23.646 --rc geninfo_all_blocks=1 00:06:23.646 --rc geninfo_unexecuted_blocks=1 00:06:23.646 00:06:23.646 ' 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:23.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.646 --rc genhtml_branch_coverage=1 00:06:23.646 --rc genhtml_function_coverage=1 00:06:23.646 --rc genhtml_legend=1 00:06:23.646 --rc geninfo_all_blocks=1 00:06:23.646 --rc geninfo_unexecuted_blocks=1 00:06:23.646 00:06:23.646 ' 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:23.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.646 --rc genhtml_branch_coverage=1 00:06:23.646 --rc genhtml_function_coverage=1 00:06:23.646 --rc genhtml_legend=1 00:06:23.646 --rc geninfo_all_blocks=1 00:06:23.646 --rc geninfo_unexecuted_blocks=1 00:06:23.646 00:06:23.646 ' 00:06:23.646 11:18:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:23.646 11:18:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:23.646 11:18:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:23.646 11:18:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:23.646 11:18:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.904 ************************************ 00:06:23.904 START TEST default_locks 00:06:23.904 ************************************ 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59859 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59859 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59859 ']' 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:23.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:23.904 11:18:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.904 [2024-11-15 11:18:06.732696] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
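[Editor's aside: the lt/cmp_versions trace above splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A condensed sketch of the same idea — not the full scripts/common.sh helper, which also validates that each field is decimal:]

  lt() {   # lt 1.15 2 -> success (0) when $1 < $2, as in the trace above
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov older than 2: pass the --rc branch/function coverage flags"
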
00:06:23.904 [2024-11-15 11:18:06.732902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59859 ] 00:06:24.162 [2024-11-15 11:18:06.917792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.162 [2024-11-15 11:18:07.055414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.097 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.097 11:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:25.097 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59859 00:06:25.097 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59859 00:06:25.097 11:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59859 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59859 ']' 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59859 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59859 00:06:25.663 killing process with pid 59859 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59859' 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59859 00:06:25.663 11:18:08 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59859 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59859 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59859 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59859 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59859 ']' 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.567 ERROR: process (pid: 59859) is no longer running 00:06:27.567 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59859) - No such process 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.567 00:06:27.567 real 0m3.891s 00:06:27.567 user 0m3.834s 00:06:27.567 sys 0m0.783s 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.567 11:18:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.567 ************************************ 00:06:27.567 END TEST default_locks 00:06:27.567 ************************************ 00:06:27.826 11:18:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.826 11:18:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.826 11:18:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.826 11:18:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.826 ************************************ 00:06:27.826 START TEST default_locks_via_rpc 00:06:27.826 ************************************ 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59934 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59934 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59934 ']' 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.826 11:18:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.826 [2024-11-15 11:18:10.676990] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:27.826 [2024-11-15 11:18:10.677499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59934 ] 00:06:28.085 [2024-11-15 11:18:10.864878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.085 [2024-11-15 11:18:10.992062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.028 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.029 11:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.029 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59934 00:06:29.029 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59934 00:06:29.029 11:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59934 
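[Editor's aside: both default_locks tests verify lock ownership with the same two-step helper shown in the trace — list the advisory locks held by the target's pid, then grep for the spdk_cpu_lock prefix. Isolated, assuming lslocks from util-linux and a running spdk_tgt pid:]

  # a live spdk_tgt should show its locks on /var/tmp/spdk_cpu_lock_* by pid
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 59934 && echo "pid 59934 holds its core lock(s)"
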
00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59934 ']' 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59934 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59934 00:06:29.594 killing process with pid 59934 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59934' 00:06:29.594 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59934 00:06:29.595 11:18:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59934 00:06:31.495 ************************************ 00:06:31.495 END TEST default_locks_via_rpc 00:06:31.495 00:06:31.496 real 0m3.833s 00:06:31.496 user 0m3.832s 00:06:31.496 sys 0m0.770s 00:06:31.496 11:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.496 11:18:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.496 ************************************ 00:06:31.496 11:18:14 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:31.496 11:18:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:31.496 11:18:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.496 11:18:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.496 ************************************ 00:06:31.496 START TEST non_locking_app_on_locked_coremask 00:06:31.496 ************************************ 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60010 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60010 /var/tmp/spdk.sock 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60010 ']' 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.496 11:18:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.754 [2024-11-15 11:18:14.616579] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:31.754 [2024-11-15 11:18:14.616838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:06:32.012 [2024-11-15 11:18:14.806021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.012 [2024-11-15 11:18:14.928643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60026 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60026 /var/tmp/spdk2.sock 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60026 ']' 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:32.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:32.957 11:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.957 [2024-11-15 11:18:15.887748] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:32.957 [2024-11-15 11:18:15.887940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60026 ] 00:06:33.216 [2024-11-15 11:18:16.089696] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.216 [2024-11-15 11:18:16.089766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.475 [2024-11-15 11:18:16.336642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.057 11:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.057 11:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:36.057 11:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60010 00:06:36.057 11:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60010 00:06:36.057 11:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60010 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60010 ']' 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60010 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60010 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:36.622 killing process with pid 60010 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60010' 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60010 00:06:36.622 11:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60010 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60026 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60026 ']' 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60026 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60026 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.885 killing process with pid 60026 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60026' 00:06:41.885 11:18:24 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60026 00:06:41.885 11:18:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60026 00:06:43.827 00:06:43.827 real 0m12.030s 00:06:43.827 user 0m12.448s 00:06:43.827 sys 0m1.688s 00:06:43.827 11:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.827 11:18:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.827 ************************************ 00:06:43.827 END TEST non_locking_app_on_locked_coremask 00:06:43.827 ************************************ 00:06:43.827 11:18:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.827 11:18:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.827 11:18:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.827 11:18:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.827 ************************************ 00:06:43.827 START TEST locking_app_on_unlocked_coremask 00:06:43.827 ************************************ 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60183 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60183 /var/tmp/spdk.sock 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60183 ']' 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.827 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.828 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.828 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.828 11:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.828 [2024-11-15 11:18:26.648822] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:06:43.828 [2024-11-15 11:18:26.649002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60183 ] 00:06:44.085 [2024-11-15 11:18:26.848272] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
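[Editor's aside: the shape of the scenario non_locking_app_on_locked_coremask just verified — one target claims core 0, a second target on the same core opts out of claiming and therefore starts cleanly. Binary paths are shortened here; the log uses the full /home/vagrant/spdk_repo/spdk/build/bin path plus waitforlisten between the two launches:]

  spdk_tgt -m 0x1 &                                # claims /var/tmp/spdk_cpu_lock_000
  spdk_tgt -m 0x1 --disable-cpumask-locks \
           -r /var/tmp/spdk2.sock &                # same core, no claim, no conflict
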
00:06:44.086 [2024-11-15 11:18:26.848345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.086 [2024-11-15 11:18:27.016837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60204 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60204 /var/tmp/spdk2.sock 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60204 ']' 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.021 11:18:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.279 [2024-11-15 11:18:28.095202] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:45.279 [2024-11-15 11:18:28.095372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60204 ] 00:06:45.536 [2024-11-15 11:18:28.309284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.794 [2024-11-15 11:18:28.615767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.338 11:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.338 11:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:48.338 11:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60204 00:06:48.338 11:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60204 00:06:48.338 11:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60183 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60183 ']' 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60183 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60183 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.274 killing process with pid 60183 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60183' 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60183 00:06:49.274 11:18:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60183 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60204 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60204 ']' 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60204 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60204 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:54.538 killing process with pid 60204 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60204' 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60204 00:06:54.538 11:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60204 00:06:55.915 00:06:55.915 real 0m12.332s 00:06:55.915 user 0m13.064s 00:06:55.915 sys 0m1.618s 00:06:55.915 11:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.915 ************************************ 00:06:55.915 END TEST locking_app_on_unlocked_coremask 00:06:55.915 ************************************ 00:06:55.915 11:18:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.175 11:18:38 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:56.175 11:18:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:56.175 11:18:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.175 11:18:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.175 ************************************ 00:06:56.175 START TEST locking_app_on_locked_coremask 00:06:56.175 ************************************ 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60358 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60358 /var/tmp/spdk.sock 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60358 ']' 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.175 11:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.175 [2024-11-15 11:18:39.044091] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:56.175 [2024-11-15 11:18:39.044308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60358 ] 00:06:56.434 [2024-11-15 11:18:39.236773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.434 [2024-11-15 11:18:39.371345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60376 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60376 /var/tmp/spdk2.sock 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60376 /var/tmp/spdk2.sock 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60376 /var/tmp/spdk2.sock 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60376 ']' 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:57.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:57.371 11:18:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.629 [2024-11-15 11:18:40.387924] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:06:57.629 [2024-11-15 11:18:40.388135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:06:57.888 [2024-11-15 11:18:40.587299] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60358 has claimed it. 00:06:57.888 [2024-11-15 11:18:40.587392] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:58.146 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60376) - No such process 00:06:58.146 ERROR: process (pid: 60376) is no longer running 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60358 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60358 00:06:58.146 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60358 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60358 ']' 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60358 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60358 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:58.712 killing process with pid 60358 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60358' 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60358 00:06:58.712 11:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60358 00:07:01.240 00:07:01.240 real 0m4.900s 00:07:01.240 user 0m5.177s 00:07:01.240 sys 0m0.973s 00:07:01.240 11:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.240 ************************************ 00:07:01.240 END 
TEST locking_app_on_locked_coremask 00:07:01.240 ************************************ 00:07:01.240 11:18:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.240 11:18:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.240 11:18:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:01.240 11:18:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.240 11:18:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.240 ************************************ 00:07:01.240 START TEST locking_overlapped_coremask 00:07:01.240 ************************************ 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60450 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60450 /var/tmp/spdk.sock 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60450 ']' 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.240 11:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.240 [2024-11-15 11:18:44.000467] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
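[Editor's aside: the per-core locks that lslocks reports are ordinary advisory file locks, which is why a second claim on an already-held core fails at startup, as locking_app_on_locked_coremask just showed. util-linux flock reproduces the claim/conflict semantics from the shell; SPDK takes these locks in C at startup, so this is an illustration of the mechanism, not its code:]

  exec 9>/var/tmp/spdk_cpu_lock_000
  flock -xn 9 || echo "Cannot create lock on core 0: another process has claimed it"
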
00:07:01.240 [2024-11-15 11:18:44.000678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60450 ] 00:07:01.499 [2024-11-15 11:18:44.189356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.499 [2024-11-15 11:18:44.327018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.499 [2024-11-15 11:18:44.327173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.499 [2024-11-15 11:18:44.327200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60469 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60469 /var/tmp/spdk2.sock 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60469 /var/tmp/spdk2.sock 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60469 /var/tmp/spdk2.sock 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60469 ']' 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.480 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.480 [2024-11-15 11:18:45.306890] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:02.480 [2024-11-15 11:18:45.307101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60469 ] 00:07:02.738 [2024-11-15 11:18:45.506486] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60450 has claimed it. 00:07:02.738 [2024-11-15 11:18:45.506569] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.304 ERROR: process (pid: 60469) is no longer running 00:07:03.304 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60469) - No such process 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60450 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60450 ']' 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60450 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:03.304 11:18:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:03.304 11:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60450 00:07:03.304 11:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.304 11:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.304 11:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60450' 00:07:03.304 killing process with pid 60450 00:07:03.304 11:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60450 00:07:03.304 11:18:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60450 00:07:05.832 00:07:05.832 real 0m4.501s 00:07:05.832 user 0m12.134s 00:07:05.832 sys 0m0.674s 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.832 ************************************ 00:07:05.832 END TEST locking_overlapped_coremask 00:07:05.832 ************************************ 00:07:05.832 11:18:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:05.832 11:18:48 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.832 11:18:48 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.832 11:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.832 ************************************ 00:07:05.832 START TEST locking_overlapped_coremask_via_rpc 00:07:05.832 ************************************ 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60533 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60533 /var/tmp/spdk.sock 00:07:05.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60533 ']' 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:05.832 11:18:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.832 [2024-11-15 11:18:48.569863] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:05.832 [2024-11-15 11:18:48.570586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60533 ] 00:07:05.832 [2024-11-15 11:18:48.770969] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
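[Editor's aside: locking_overlapped_coremask pits mask 0x7 (cores 0-2) against 0x1c (cores 2-4); the error names core 2 because that is the only bit the two masks share:]

  printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only bit 2 (core 2)
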
00:07:05.832 [2024-11-15 11:18:48.771330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.091 [2024-11-15 11:18:48.923003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.091 [2024-11-15 11:18:48.923144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.091 [2024-11-15 11:18:48.923144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60551 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60551 /var/tmp/spdk2.sock 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60551 ']' 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.027 11:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.027 [2024-11-15 11:18:49.935347] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:07.027 [2024-11-15 11:18:49.935773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60551 ] 00:07:07.285 [2024-11-15 11:18:50.138349] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.285 [2024-11-15 11:18:50.138413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.542 [2024-11-15 11:18:50.409601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.542 [2024-11-15 11:18:50.409660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.542 [2024-11-15 11:18:50.409667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.069 [2024-11-15 11:18:52.749272] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60533 has claimed it. 
00:07:10.069 request: 00:07:10.069 { 00:07:10.069 "method": "framework_enable_cpumask_locks", 00:07:10.069 "req_id": 1 00:07:10.069 } 00:07:10.069 Got JSON-RPC error response 00:07:10.069 response: 00:07:10.069 { 00:07:10.069 "code": -32603, 00:07:10.069 "message": "Failed to claim CPU core: 2" 00:07:10.069 } 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60533 /var/tmp/spdk.sock 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60533 ']' 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.069 11:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60551 /var/tmp/spdk2.sock 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60551 ']' 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:10.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
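The exchange above is the core of this test: the first target (-m 0x7, cores 0-2) claimed its cores via framework_enable_cpumask_locks, so when the second target (-m 0x1c, cores 2-4) issues the same RPC over /var/tmp/spdk2.sock, the overlap on core 2 is rejected with JSON-RPC error -32603. A minimal manual reproduction, assuming the same build tree as this run (build/bin/spdk_tgt, scripts/rpc.py) and that both targets are given time to start listening:

  # Two targets with overlapping core masks; lock claiming deferred at startup.
  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 2   # crude stand-in for the waitforlisten polling the test does

  # First target claims cores 0-2, creating /var/tmp/spdk_cpu_lock_000..002.
  scripts/rpc.py framework_enable_cpumask_locks

  # Second target now races for core 2 and gets -32603 "Failed to claim CPU core: 2".
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks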
00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:10.330 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.587 ************************************ 00:07:10.587 END TEST locking_overlapped_coremask_via_rpc 00:07:10.587 ************************************ 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.587 00:07:10.587 real 0m4.994s 00:07:10.587 user 0m1.954s 00:07:10.587 sys 0m0.253s 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:10.587 11:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.587 11:18:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:10.587 11:18:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60533 ]] 00:07:10.587 11:18:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60533 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60533 ']' 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60533 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60533 00:07:10.587 killing process with pid 60533 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60533' 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60533 00:07:10.587 11:18:53 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60533 00:07:13.116 11:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60551 ]] 00:07:13.116 11:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60551 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60551 ']' 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60551 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.116 
11:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60551 00:07:13.116 killing process with pid 60551 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60551' 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60551 00:07:13.116 11:18:55 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60551 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60533 ]] 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60533 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60533 ']' 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60533 00:07:15.641 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60533) - No such process 00:07:15.641 Process with pid 60533 is not found 00:07:15.641 Process with pid 60551 is not found 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60533 is not found' 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60551 ]] 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60551 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60551 ']' 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60551 00:07:15.641 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60551) - No such process 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60551 is not found' 00:07:15.641 11:18:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:15.641 00:07:15.641 real 0m51.618s 00:07:15.641 user 1m29.066s 00:07:15.641 sys 0m8.049s 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.641 11:18:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 ************************************ 00:07:15.641 END TEST cpu_locks 00:07:15.641 ************************************ 00:07:15.641 00:07:15.641 real 1m25.358s 00:07:15.641 user 2m37.209s 00:07:15.642 sys 0m12.503s 00:07:15.642 11:18:58 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.642 ************************************ 00:07:15.642 END TEST event 00:07:15.642 ************************************ 00:07:15.642 11:18:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 11:18:58 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:15.642 11:18:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:15.642 11:18:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.642 11:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 ************************************ 00:07:15.642 START TEST thread 00:07:15.642 ************************************ 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:15.642 * Looking for test storage... 
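Both kill paths above go through the suite's killprocess helper, which is deliberately idempotent: the second invocation from cleanup finds the pids already gone ("No such process") and downgrades that to the informational "Process with pid ... is not found" message. A condensed sketch of that pattern, assuming plain coreutils and omitting the helper's extra ps/comm and sudo checks:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 only probes for existence; an already-dead pid counts as a clean result.
    kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
    kill "$pid"            # request reactor shutdown
    wait "$pid" || true    # reap the child so the pid cannot be recycled mid-test
  }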
00:07:15.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:15.642 11:18:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.642 11:18:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.642 11:18:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.642 11:18:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.642 11:18:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.642 11:18:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.642 11:18:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.642 11:18:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.642 11:18:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.642 11:18:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.642 11:18:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.642 11:18:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:15.642 11:18:58 thread -- scripts/common.sh@345 -- # : 1 00:07:15.642 11:18:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.642 11:18:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.642 11:18:58 thread -- scripts/common.sh@365 -- # decimal 1 00:07:15.642 11:18:58 thread -- scripts/common.sh@353 -- # local d=1 00:07:15.642 11:18:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.642 11:18:58 thread -- scripts/common.sh@355 -- # echo 1 00:07:15.642 11:18:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.642 11:18:58 thread -- scripts/common.sh@366 -- # decimal 2 00:07:15.642 11:18:58 thread -- scripts/common.sh@353 -- # local d=2 00:07:15.642 11:18:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.642 11:18:58 thread -- scripts/common.sh@355 -- # echo 2 00:07:15.642 11:18:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.642 11:18:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.642 11:18:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.642 11:18:58 thread -- scripts/common.sh@368 -- # return 0 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.642 --rc genhtml_branch_coverage=1 00:07:15.642 --rc genhtml_function_coverage=1 00:07:15.642 --rc genhtml_legend=1 00:07:15.642 --rc geninfo_all_blocks=1 00:07:15.642 --rc geninfo_unexecuted_blocks=1 00:07:15.642 00:07:15.642 ' 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.642 --rc genhtml_branch_coverage=1 00:07:15.642 --rc genhtml_function_coverage=1 00:07:15.642 --rc genhtml_legend=1 00:07:15.642 --rc geninfo_all_blocks=1 00:07:15.642 --rc geninfo_unexecuted_blocks=1 00:07:15.642 00:07:15.642 ' 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:15.642 --rc genhtml_branch_coverage=1 00:07:15.642 --rc genhtml_function_coverage=1 00:07:15.642 --rc genhtml_legend=1 00:07:15.642 --rc geninfo_all_blocks=1 00:07:15.642 --rc geninfo_unexecuted_blocks=1 00:07:15.642 00:07:15.642 ' 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:15.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.642 --rc genhtml_branch_coverage=1 00:07:15.642 --rc genhtml_function_coverage=1 00:07:15.642 --rc genhtml_legend=1 00:07:15.642 --rc geninfo_all_blocks=1 00:07:15.642 --rc geninfo_unexecuted_blocks=1 00:07:15.642 00:07:15.642 ' 00:07:15.642 11:18:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.642 11:18:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.642 ************************************ 00:07:15.642 START TEST thread_poller_perf 00:07:15.642 ************************************ 00:07:15.642 11:18:58 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:15.642 [2024-11-15 11:18:58.350384] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:15.642 [2024-11-15 11:18:58.350779] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60752 ] 00:07:15.642 [2024-11-15 11:18:58.529996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.901 [2024-11-15 11:18:58.708979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.901 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:17.275 [2024-11-15T11:19:00.224Z] ====================================== 00:07:17.275 [2024-11-15T11:19:00.224Z] busy:2212099080 (cyc) 00:07:17.275 [2024-11-15T11:19:00.224Z] total_run_count: 295000 00:07:17.275 [2024-11-15T11:19:00.224Z] tsc_hz: 2200000000 (cyc) 00:07:17.275 [2024-11-15T11:19:00.224Z] ====================================== 00:07:17.275 [2024-11-15T11:19:00.224Z] poller_cost: 7498 (cyc), 3408 (nsec) 00:07:17.275 00:07:17.275 real 0m1.624s 00:07:17.275 user 0m1.406s 00:07:17.275 sys 0m0.106s 00:07:17.275 11:18:59 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.275 ************************************ 00:07:17.275 END TEST thread_poller_perf 00:07:17.275 11:18:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 ************************************ 00:07:17.275 11:18:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:17.275 11:18:59 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:17.275 11:18:59 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.275 11:18:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 ************************************ 00:07:17.275 START TEST thread_poller_perf 00:07:17.275 ************************************ 00:07:17.275 11:18:59 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:17.275 [2024-11-15 11:19:00.040330] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:17.275 [2024-11-15 11:19:00.040738] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60788 ] 00:07:17.534 [2024-11-15 11:19:00.223285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.534 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:17.534 [2024-11-15 11:19:00.350748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.909 [2024-11-15T11:19:01.858Z] ====================================== 00:07:18.909 [2024-11-15T11:19:01.858Z] busy:2204073548 (cyc) 00:07:18.909 [2024-11-15T11:19:01.858Z] total_run_count: 3845000 00:07:18.909 [2024-11-15T11:19:01.858Z] tsc_hz: 2200000000 (cyc) 00:07:18.909 [2024-11-15T11:19:01.858Z] ====================================== 00:07:18.909 [2024-11-15T11:19:01.858Z] poller_cost: 573 (cyc), 260 (nsec) 00:07:18.909 00:07:18.909 real 0m1.592s 00:07:18.909 user 0m1.376s 00:07:18.909 sys 0m0.106s 00:07:18.909 ************************************ 00:07:18.909 END TEST thread_poller_perf 00:07:18.909 ************************************ 00:07:18.909 11:19:01 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.909 11:19:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.909 11:19:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:18.909 ************************************ 00:07:18.909 END TEST thread 00:07:18.909 ************************************ 00:07:18.909 00:07:18.909 real 0m3.506s 00:07:18.909 user 0m2.935s 00:07:18.909 sys 0m0.339s 00:07:18.909 11:19:01 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:18.909 11:19:01 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.909 11:19:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:18.909 11:19:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:18.909 11:19:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:18.909 11:19:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:18.909 11:19:01 -- common/autotest_common.sh@10 -- # set +x 00:07:18.909 ************************************ 00:07:18.909 START TEST app_cmdline 00:07:18.909 ************************************ 00:07:18.909 11:19:01 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:18.909 * Looking for test storage... 
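The two poller_perf runs differ only in -l (poller period in microseconds): the first ran 1000 timed pollers firing every 1 µs, the second the same pollers in a 0 µs busy loop. The printed poller_cost follows from the counters above, assuming cost = busy cycles / total_run_count and nsec = cyc / (tsc_hz / 1e9): 2212099080 / 295000 ≈ 7498 cyc ≈ 3408 ns at 2.2 GHz for the timed case, against 2204073548 / 3845000 ≈ 573 cyc ≈ 260 ns for busy polling, so timer management costs roughly 13x the bare poll on this host. The same derivation in shell:

  # Re-derive both poller_cost figures from the counters printed above.
  awk 'BEGIN {
    tsc = 2200000000
    printf "timed: %d cyc, %d nsec\n", 2212099080/295000,  2212099080/295000  / (tsc/1e9)
    printf "busy:  %d cyc, %d nsec\n", 2204073548/3845000, 2204073548/3845000 / (tsc/1e9)
  }'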
00:07:18.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:18.909 11:19:01 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.909 11:19:01 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.909 11:19:01 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.909 11:19:01 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.909 11:19:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.910 11:19:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.910 --rc genhtml_branch_coverage=1 00:07:18.910 --rc genhtml_function_coverage=1 00:07:18.910 --rc genhtml_legend=1 00:07:18.910 --rc geninfo_all_blocks=1 00:07:18.910 --rc geninfo_unexecuted_blocks=1 00:07:18.910 00:07:18.910 ' 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.910 --rc genhtml_branch_coverage=1 00:07:18.910 --rc genhtml_function_coverage=1 00:07:18.910 --rc genhtml_legend=1 00:07:18.910 --rc geninfo_all_blocks=1 00:07:18.910 --rc geninfo_unexecuted_blocks=1 00:07:18.910 
00:07:18.910 ' 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.910 --rc genhtml_branch_coverage=1 00:07:18.910 --rc genhtml_function_coverage=1 00:07:18.910 --rc genhtml_legend=1 00:07:18.910 --rc geninfo_all_blocks=1 00:07:18.910 --rc geninfo_unexecuted_blocks=1 00:07:18.910 00:07:18.910 ' 00:07:18.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.910 --rc genhtml_branch_coverage=1 00:07:18.910 --rc genhtml_function_coverage=1 00:07:18.910 --rc genhtml_legend=1 00:07:18.910 --rc geninfo_all_blocks=1 00:07:18.910 --rc geninfo_unexecuted_blocks=1 00:07:18.910 00:07:18.910 ' 00:07:18.910 11:19:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:18.910 11:19:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60877 00:07:18.910 11:19:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60877 00:07:18.910 11:19:01 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60877 ']' 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.910 11:19:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:19.168 [2024-11-15 11:19:01.981496] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:19.168 [2024-11-15 11:19:01.981906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60877 ] 00:07:19.426 [2024-11-15 11:19:02.173169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.426 [2024-11-15 11:19:02.337019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.361 11:19:03 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:20.361 11:19:03 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:20.361 11:19:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:20.619 { 00:07:20.619 "version": "SPDK v25.01-pre git sha1 514198259", 00:07:20.619 "fields": { 00:07:20.619 "major": 25, 00:07:20.619 "minor": 1, 00:07:20.619 "patch": 0, 00:07:20.619 "suffix": "-pre", 00:07:20.619 "commit": "514198259" 00:07:20.619 } 00:07:20.619 } 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:20.619 11:19:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:20.619 11:19:03 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.186 request: 00:07:21.186 { 00:07:21.186 "method": "env_dpdk_get_mem_stats", 00:07:21.186 "req_id": 1 00:07:21.186 } 00:07:21.186 Got JSON-RPC error response 00:07:21.186 response: 00:07:21.186 { 00:07:21.186 "code": -32601, 00:07:21.186 "message": "Method not found" 00:07:21.186 } 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.186 11:19:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60877 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60877 ']' 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60877 00:07:21.186 11:19:03 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60877 00:07:21.187 killing process with pid 60877 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60877' 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@971 -- # kill 60877 00:07:21.187 11:19:03 app_cmdline -- common/autotest_common.sh@976 -- # wait 60877 00:07:23.726 ************************************ 00:07:23.726 END TEST app_cmdline 00:07:23.726 ************************************ 00:07:23.726 00:07:23.726 real 0m4.374s 00:07:23.726 user 0m4.832s 00:07:23.726 sys 0m0.685s 00:07:23.726 11:19:06 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.726 11:19:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.726 11:19:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:23.726 11:19:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:23.726 11:19:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.726 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:07:23.726 ************************************ 00:07:23.726 START TEST version 00:07:23.726 ************************************ 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:23.726 * Looking for test storage... 
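That -32601 is the point of the app_cmdline test: this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside the whitelist, including real ones like env_dpdk_get_mem_stats, is reported as "Method not found" rather than executed. Against the same socket:

  # Whitelisted methods answer normally; everything else is rejected.
  scripts/rpc.py spdk_get_version        # returns the version object shown above
  scripts/rpc.py rpc_get_methods         # lists exactly the two allowed methods
  scripts/rpc.py env_dpdk_get_mem_stats  # JSON-RPC error -32601 "Method not found"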
00:07:23.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:23.726 11:19:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.726 11:19:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.726 11:19:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.726 11:19:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.726 11:19:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.726 11:19:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.726 11:19:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.726 11:19:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.726 11:19:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.726 11:19:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.726 11:19:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.726 11:19:06 version -- scripts/common.sh@344 -- # case "$op" in 00:07:23.726 11:19:06 version -- scripts/common.sh@345 -- # : 1 00:07:23.726 11:19:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.726 11:19:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.726 11:19:06 version -- scripts/common.sh@365 -- # decimal 1 00:07:23.726 11:19:06 version -- scripts/common.sh@353 -- # local d=1 00:07:23.726 11:19:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.726 11:19:06 version -- scripts/common.sh@355 -- # echo 1 00:07:23.726 11:19:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.726 11:19:06 version -- scripts/common.sh@366 -- # decimal 2 00:07:23.726 11:19:06 version -- scripts/common.sh@353 -- # local d=2 00:07:23.726 11:19:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.726 11:19:06 version -- scripts/common.sh@355 -- # echo 2 00:07:23.726 11:19:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.726 11:19:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.726 11:19:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.726 11:19:06 version -- scripts/common.sh@368 -- # return 0 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.726 --rc genhtml_branch_coverage=1 00:07:23.726 --rc genhtml_function_coverage=1 00:07:23.726 --rc genhtml_legend=1 00:07:23.726 --rc geninfo_all_blocks=1 00:07:23.726 --rc geninfo_unexecuted_blocks=1 00:07:23.726 00:07:23.726 ' 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.726 --rc genhtml_branch_coverage=1 00:07:23.726 --rc genhtml_function_coverage=1 00:07:23.726 --rc genhtml_legend=1 00:07:23.726 --rc geninfo_all_blocks=1 00:07:23.726 --rc geninfo_unexecuted_blocks=1 00:07:23.726 00:07:23.726 ' 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:23.726 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:23.726 --rc genhtml_branch_coverage=1 00:07:23.726 --rc genhtml_function_coverage=1 00:07:23.726 --rc genhtml_legend=1 00:07:23.726 --rc geninfo_all_blocks=1 00:07:23.726 --rc geninfo_unexecuted_blocks=1 00:07:23.726 00:07:23.726 ' 00:07:23.726 11:19:06 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:23.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.726 --rc genhtml_branch_coverage=1 00:07:23.726 --rc genhtml_function_coverage=1 00:07:23.726 --rc genhtml_legend=1 00:07:23.726 --rc geninfo_all_blocks=1 00:07:23.726 --rc geninfo_unexecuted_blocks=1 00:07:23.726 00:07:23.726 ' 00:07:23.726 11:19:06 version -- app/version.sh@17 -- # get_header_version major 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # cut -f2 00:07:23.726 11:19:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.726 11:19:06 version -- app/version.sh@17 -- # major=25 00:07:23.726 11:19:06 version -- app/version.sh@18 -- # get_header_version minor 00:07:23.726 11:19:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # cut -f2 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.726 11:19:06 version -- app/version.sh@18 -- # minor=1 00:07:23.726 11:19:06 version -- app/version.sh@19 -- # get_header_version patch 00:07:23.726 11:19:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # cut -f2 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.726 11:19:06 version -- app/version.sh@19 -- # patch=0 00:07:23.726 11:19:06 version -- app/version.sh@20 -- # get_header_version suffix 00:07:23.726 11:19:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # cut -f2 00:07:23.726 11:19:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.726 11:19:06 version -- app/version.sh@20 -- # suffix=-pre 00:07:23.726 11:19:06 version -- app/version.sh@22 -- # version=25.1 00:07:23.726 11:19:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:23.726 11:19:06 version -- app/version.sh@28 -- # version=25.1rc0 00:07:23.727 11:19:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:23.727 11:19:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:23.727 11:19:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:23.727 11:19:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:23.727 00:07:23.727 real 0m0.272s 00:07:23.727 user 0m0.177s 00:07:23.727 sys 0m0.126s 00:07:23.727 ************************************ 00:07:23.727 END TEST version 00:07:23.727 ************************************ 00:07:23.727 11:19:06 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.727 11:19:06 version -- common/autotest_common.sh@10 -- # set +x 00:07:23.727 11:19:06 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:23.727 11:19:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:23.727 11:19:06 -- spdk/autotest.sh@194 -- # uname -s 00:07:23.727 11:19:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:23.727 11:19:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:23.727 11:19:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:23.727 11:19:06 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:23.727 11:19:06 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:23.727 11:19:06 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:23.727 11:19:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.727 11:19:06 -- common/autotest_common.sh@10 -- # set +x 00:07:23.727 ************************************ 00:07:23.727 START TEST blockdev_nvme 00:07:23.727 ************************************ 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:23.727 * Looking for test storage... 00:07:23.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.727 11:19:06 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:23.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.727 --rc genhtml_branch_coverage=1 00:07:23.727 --rc genhtml_function_coverage=1 00:07:23.727 --rc genhtml_legend=1 00:07:23.727 --rc geninfo_all_blocks=1 00:07:23.727 --rc geninfo_unexecuted_blocks=1 00:07:23.727 00:07:23.727 ' 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:23.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.727 --rc genhtml_branch_coverage=1 00:07:23.727 --rc genhtml_function_coverage=1 00:07:23.727 --rc genhtml_legend=1 00:07:23.727 --rc geninfo_all_blocks=1 00:07:23.727 --rc geninfo_unexecuted_blocks=1 00:07:23.727 00:07:23.727 ' 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:23.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.727 --rc genhtml_branch_coverage=1 00:07:23.727 --rc genhtml_function_coverage=1 00:07:23.727 --rc genhtml_legend=1 00:07:23.727 --rc geninfo_all_blocks=1 00:07:23.727 --rc geninfo_unexecuted_blocks=1 00:07:23.727 00:07:23.727 ' 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:23.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.727 --rc genhtml_branch_coverage=1 00:07:23.727 --rc genhtml_function_coverage=1 00:07:23.727 --rc genhtml_legend=1 00:07:23.727 --rc geninfo_all_blocks=1 00:07:23.727 --rc geninfo_unexecuted_blocks=1 00:07:23.727 00:07:23.727 ' 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:23.727 11:19:06 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61066 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:23.727 11:19:06 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61066 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 61066 ']' 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.727 11:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.985 [2024-11-15 11:19:06.732773] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
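Once pid 61066 is listening, setup_nvme_conf (below) loads the JSON emitted by scripts/gen_nvme.sh: one bdev_nvme_attach_controller entry per emulated PCIe function, yielding controllers Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0. The equivalent hand-issued attach for the first controller, assuming the same QEMU devices, would be roughly:

  # Attach one PCIe controller by hand instead of via load_subsystem_config.
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0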
00:07:23.986 [2024-11-15 11:19:06.733166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61066 ] 00:07:23.986 [2024-11-15 11:19:06.917822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.244 [2024-11-15 11:19:07.036570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.179 11:19:07 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.179 11:19:07 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:07:25.179 11:19:07 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:25.179 11:19:07 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:25.179 11:19:07 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:25.179 11:19:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:25.179 11:19:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:25.179 11:19:07 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:25.179 11:19:07 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.179 11:19:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.437 11:19:08 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.437 11:19:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:25.437 11:19:08 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.695 11:19:08 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:25.695 11:19:08 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:25.696 11:19:08 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3cfe4627-3501-414d-a430-7fdde4467f73"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3cfe4627-3501-414d-a430-7fdde4467f73",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f42135c2-d25e-406c-a68d-029fb92a5f48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f42135c2-d25e-406c-a68d-029fb92a5f48",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b3f12570-a07a-4ba5-9787-7ac430f08392"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3f12570-a07a-4ba5-9787-7ac430f08392",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4c486107-3763-4e5f-9b2a-ad2c9c7e0ec4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c486107-3763-4e5f-9b2a-ad2c9c7e0ec4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9e88f5d9-83d0-49d6-958e-106f310c408b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "9e88f5d9-83d0-49d6-958e-106f310c408b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f4790041-a352-4ebb-af92-fbf5bb545222"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f4790041-a352-4ebb-af92-fbf5bb545222",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:25.696 11:19:08 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:25.696 11:19:08 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:25.696 11:19:08 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:25.696 11:19:08 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61066 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 61066 ']' 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 61066 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:07:25.696 11:19:08 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61066 00:07:25.696 killing process with pid 61066 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61066' 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 61066 00:07:25.696 11:19:08 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 61066 00:07:28.226 11:19:10 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:28.226 11:19:10 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:28.226 11:19:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:28.226 11:19:10 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.226 11:19:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.226 ************************************ 00:07:28.226 START TEST bdev_hello_world 00:07:28.226 ************************************ 00:07:28.226 11:19:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:28.226 [2024-11-15 11:19:10.861336] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:28.226 [2024-11-15 11:19:10.861555] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61161 ] 00:07:28.226 [2024-11-15 11:19:11.059363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.484 [2024-11-15 11:19:11.217079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.050 [2024-11-15 11:19:11.895538] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:29.050 [2024-11-15 11:19:11.895600] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:29.050 [2024-11-15 11:19:11.895629] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:29.050 [2024-11-15 11:19:11.898909] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:29.050 [2024-11-15 11:19:11.899332] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:29.050 [2024-11-15 11:19:11.899366] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:29.050 [2024-11-15 11:19:11.899570] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
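For context, the hello_bdev example run above consumes a JSON configuration of the same shape as the subsystem config that gen_nvme.sh emitted for the four QEMU NVMe controllers. A minimal single-controller sketch of such a file (the /tmp/bdev.json path and the lone Nvme0 entry here are illustrative; the job's actual bdev.json is produced by the test scripts) would be:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# attaching the controller as Nvme0 exposes its first namespace as bdev Nvme0n1
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1

As the notices show, the example opens the named bdev, writes a "Hello World!" string through an I/O channel, reads it back, and then stops the app.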
00:07:29.050 00:07:29.050 [2024-11-15 11:19:11.899620] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:30.423 00:07:30.423 real 0m2.223s 00:07:30.423 user 0m1.813s 00:07:30.423 sys 0m0.298s 00:07:30.423 ************************************ 00:07:30.423 END TEST bdev_hello_world 00:07:30.423 ************************************ 00:07:30.423 11:19:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.423 11:19:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:30.423 11:19:13 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:30.423 11:19:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:30.423 11:19:13 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.423 11:19:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.423 ************************************ 00:07:30.423 START TEST bdev_bounds 00:07:30.423 ************************************ 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:30.423 Process bdevio pid: 61203 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61203 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61203' 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61203 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61203 ']' 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.423 11:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:30.423 [2024-11-15 11:19:13.129018] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:30.423 [2024-11-15 11:19:13.129241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61203 ] 00:07:30.423 [2024-11-15 11:19:13.319593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.682 [2024-11-15 11:19:13.487810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.682 [2024-11-15 11:19:13.487935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.682 [2024-11-15 11:19:13.487942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.616 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.616 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:31.616 11:19:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:31.616 I/O targets: 00:07:31.616 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:31.616 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:31.616 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:31.616 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:31.616 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:31.616 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:31.616 00:07:31.616 00:07:31.616 CUnit - A unit testing framework for C - Version 2.1-3 00:07:31.616 http://cunit.sourceforge.net/ 00:07:31.616 00:07:31.616 00:07:31.616 Suite: bdevio tests on: Nvme3n1 00:07:31.616 Test: blockdev write read block ...passed 00:07:31.616 Test: blockdev write zeroes read block ...passed 00:07:31.616 Test: blockdev write zeroes read no split ...passed 00:07:31.616 Test: blockdev write zeroes read split ...passed 00:07:31.616 Test: blockdev write zeroes read split partial ...passed 00:07:31.616 Test: blockdev reset ...[2024-11-15 11:19:14.393931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:31.616 [2024-11-15 11:19:14.398014] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:31.616 passed 00:07:31.616 Test: blockdev write read 8 blocks ...passed 00:07:31.616 Test: blockdev write read size > 128k ...passed 00:07:31.616 Test: blockdev write read invalid size ...passed 00:07:31.616 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.616 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.616 Test: blockdev write read max offset ...passed 00:07:31.616 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.616 Test: blockdev writev readv 8 blocks ...passed 00:07:31.616 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.616 Test: blockdev writev readv block ...passed 00:07:31.616 Test: blockdev writev readv size > 128k ...passed 00:07:31.616 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.616 Test: blockdev comparev and writev ...[2024-11-15 11:19:14.408339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ca80a000 len:0x1000 00:07:31.616 [2024-11-15 11:19:14.408403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.616 passed 00:07:31.616 Test: blockdev nvme passthru rw ...passed 00:07:31.616 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:19:14.409438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.616 [2024-11-15 11:19:14.409610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.616 passed 00:07:31.616 Test: blockdev nvme admin passthru ...passed 00:07:31.616 Test: blockdev copy ...passed 00:07:31.616 Suite: bdevio tests on: Nvme2n3 00:07:31.616 Test: blockdev write read block ...passed 00:07:31.616 Test: blockdev write zeroes read block ...passed 00:07:31.616 Test: blockdev write zeroes read no split ...passed 00:07:31.616 Test: blockdev write zeroes read split ...passed 00:07:31.616 Test: blockdev write zeroes read split partial ...passed 00:07:31.616 Test: blockdev reset ...[2024-11-15 11:19:14.489011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:31.616 [2024-11-15 11:19:14.493298] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:31.616 Test: blockdev write read 8 blocks ...
00:07:31.616 passed 00:07:31.616 Test: blockdev write read size > 128k ...passed 00:07:31.616 Test: blockdev write read invalid size ...passed 00:07:31.616 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.616 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.616 Test: blockdev write read max offset ...passed 00:07:31.616 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.616 Test: blockdev writev readv 8 blocks ...passed 00:07:31.616 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.616 Test: blockdev writev readv block ...passed 00:07:31.616 Test: blockdev writev readv size > 128k ...passed 00:07:31.616 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.616 Test: blockdev comparev and writev ...[2024-11-15 11:19:14.502068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ada06000 len:0x1000 00:07:31.616 [2024-11-15 11:19:14.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.616 passed 00:07:31.616 Test: blockdev nvme passthru rw ...passed 00:07:31.616 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.616 Test: blockdev nvme admin passthru ...[2024-11-15 11:19:14.502881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.616 [2024-11-15 11:19:14.502933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.616 passed 00:07:31.616 Test: blockdev copy ...passed 00:07:31.616 Suite: bdevio tests on: Nvme2n2 00:07:31.616 Test: blockdev write read block ...passed 00:07:31.616 Test: blockdev write zeroes read block ...passed 00:07:31.616 Test: blockdev write zeroes read no split ...passed 00:07:31.616 Test: blockdev write zeroes read split ...passed 00:07:31.875 Test: blockdev write zeroes read split partial ...passed 00:07:31.875 Test: blockdev reset ...[2024-11-15 11:19:14.576982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:31.875 passed 00:07:31.875 Test: blockdev write read 8 blocks ...[2024-11-15 11:19:14.581304] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:31.875 passed 00:07:31.875 Test: blockdev write read size > 128k ...passed 00:07:31.875 Test: blockdev write read invalid size ...passed 00:07:31.875 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.875 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.875 Test: blockdev write read max offset ...passed 00:07:31.875 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.875 Test: blockdev writev readv 8 blocks ...passed 00:07:31.875 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.875 Test: blockdev writev readv block ...passed 00:07:31.875 Test: blockdev writev readv size > 128k ...passed 00:07:31.875 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.875 Test: blockdev comparev and writev ...[2024-11-15 11:19:14.589658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da83c000 len:0x1000 00:07:31.875 [2024-11-15 11:19:14.589717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.875 passed 00:07:31.875 Test: blockdev nvme passthru rw ...passed 00:07:31.875 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.875 Test: blockdev nvme admin passthru ...[2024-11-15 11:19:14.590561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.875 [2024-11-15 11:19:14.590601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.875 passed 00:07:31.875 Test: blockdev copy ...passed 00:07:31.876 Suite: bdevio tests on: Nvme2n1 00:07:31.876 Test: blockdev write read block ...passed 00:07:31.876 Test: blockdev write zeroes read block ...passed 00:07:31.876 Test: blockdev write zeroes read no split ...passed 00:07:31.876 Test: blockdev write zeroes read split ...passed 00:07:31.876 Test: blockdev write zeroes read split partial ...passed 00:07:31.876 Test: blockdev reset ...[2024-11-15 11:19:14.667151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:31.876 [2024-11-15 11:19:14.671227] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
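A note on the *NOTICE* lines threaded through these bdevio suites: the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions are expected negative-path output, since the comparev-and-writev and passthru tests deliberately issue commands the controller must reject, and each such test still finishes with "passed". To replay the suites by hand, the two-step invocation used by this job can be approximated as below; both commands appear verbatim earlier in the log, and only the synchronization is simplified (the sleep stands in for the harness's waitforlisten on the RPC socket):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
sleep 2  # crude stand-in for waiting until /var/tmp/spdk.sock is up
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"; wait "$bdevio_pid"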
00:07:31.876 00:07:31.876 Test: blockdev write read 8 blocks ...passed 00:07:31.876 Test: blockdev write read size > 128k ...passed 00:07:31.876 Test: blockdev write read invalid size ...passed 00:07:31.876 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.876 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.876 Test: blockdev write read max offset ...passed 00:07:31.876 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.876 Test: blockdev writev readv 8 blocks ...passed 00:07:31.876 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.876 Test: blockdev writev readv block ...passed 00:07:31.876 Test: blockdev writev readv size > 128k ...passed 00:07:31.876 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.876 Test: blockdev comparev and writev ...[2024-11-15 11:19:14.679462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da838000 len:0x1000 00:07:31.876 [2024-11-15 11:19:14.679644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.876 passed 00:07:31.876 Test: blockdev nvme passthru rw ...passed 00:07:31.876 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:19:14.680673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.876 [2024-11-15 11:19:14.680845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.876 passed 00:07:31.876 Test: blockdev nvme admin passthru ...passed 00:07:31.876 Test: blockdev copy ...passed 00:07:31.876 Suite: bdevio tests on: Nvme1n1 00:07:31.876 Test: blockdev write read block ...passed 00:07:31.876 Test: blockdev write zeroes read block ...passed 00:07:31.876 Test: blockdev write zeroes read no split ...passed 00:07:31.876 Test: blockdev write zeroes read split ...passed 00:07:31.876 Test: blockdev write zeroes read split partial ...passed 00:07:31.876 Test: blockdev reset ...[2024-11-15 11:19:14.753553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:31.876 passed 00:07:31.876 Test: blockdev write read 8 blocks ...[2024-11-15 11:19:14.757155] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:07:31.876 passed 00:07:31.876 Test: blockdev write read size > 128k ...passed 00:07:31.876 Test: blockdev write read invalid size ...passed 00:07:31.876 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:31.876 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:31.876 Test: blockdev write read max offset ...passed 00:07:31.876 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:31.876 Test: blockdev writev readv 8 blocks ...passed 00:07:31.876 Test: blockdev writev readv 30 x 1block ...passed 00:07:31.876 Test: blockdev writev readv block ...passed 00:07:31.876 Test: blockdev writev readv size > 128k ...passed 00:07:31.876 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:31.876 Test: blockdev comparev and writev ...[2024-11-15 11:19:14.767648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2da834000 len:0x1000 00:07:31.876 [2024-11-15 11:19:14.767709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:31.876 passed 00:07:31.876 Test: blockdev nvme passthru rw ...passed 00:07:31.876 Test: blockdev nvme passthru vendor specific ...passed 00:07:31.876 Test: blockdev nvme admin passthru ...[2024-11-15 11:19:14.768593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:31.876 [2024-11-15 11:19:14.768635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:31.876 passed 00:07:31.876 Test: blockdev copy ...passed 00:07:31.876 Suite: bdevio tests on: Nvme0n1 00:07:31.876 Test: blockdev write read block ...passed 00:07:31.876 Test: blockdev write zeroes read block ...passed 00:07:31.876 Test: blockdev write zeroes read no split ...passed 00:07:31.876 Test: blockdev write zeroes read split ...passed 00:07:32.134 Test: blockdev write zeroes read split partial ...passed 00:07:32.134 Test: blockdev reset ...[2024-11-15 11:19:14.842641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:32.134 passed 00:07:32.134 Test: blockdev write read 8 blocks ...[2024-11-15 11:19:14.846478] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:32.134 passed 00:07:32.134 Test: blockdev write read size > 128k ...passed 00:07:32.134 Test: blockdev write read invalid size ...passed 00:07:32.134 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:32.134 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:32.134 Test: blockdev write read max offset ...passed 00:07:32.134 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:32.134 Test: blockdev writev readv 8 blocks ...passed 00:07:32.134 Test: blockdev writev readv 30 x 1block ...passed 00:07:32.134 Test: blockdev writev readv block ...passed 00:07:32.134 Test: blockdev writev readv size > 128k ...passed 00:07:32.134 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:32.134 Test: blockdev comparev and writev ...passed 00:07:32.134 Test: blockdev nvme passthru rw ...[2024-11-15 11:19:14.853990] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:32.134 separate metadata which is not supported yet. 
00:07:32.134 passed 00:07:32.134 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:19:14.854584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:32.134 [2024-11-15 11:19:14.854741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:32.134 passed 00:07:32.134 Test: blockdev nvme admin passthru ...passed 00:07:32.134 Test: blockdev copy ...passed 00:07:32.134 00:07:32.134 Run Summary: Type Total Ran Passed Failed Inactive 00:07:32.134 suites 6 6 n/a 0 0 00:07:32.135 tests 138 138 138 0 0 00:07:32.135 asserts 893 893 893 0 n/a 00:07:32.135 00:07:32.135 Elapsed time = 1.482 seconds 00:07:32.135 0 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61203 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61203 ']' 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61203 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61203 00:07:32.135 killing process with pid 61203 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61203' 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61203 00:07:32.135 11:19:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61203 00:07:33.068 11:19:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:33.068 00:07:33.068 real 0m2.877s 00:07:33.068 user 0m7.297s 00:07:33.068 sys 0m0.447s 00:07:33.068 11:19:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.068 11:19:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:33.068 ************************************ 00:07:33.068 END TEST bdev_bounds 00:07:33.068 ************************************ 00:07:33.069 11:19:15 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:33.069 11:19:15 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:33.069 11:19:15 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.069 11:19:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.069 ************************************ 00:07:33.069 START TEST bdev_nbd 00:07:33.069 ************************************ 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local
rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61268 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61268 /var/tmp/spdk-nbd.sock 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61268 ']' 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:33.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.069 11:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:33.327 [2024-11-15 11:19:16.051674] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:07:33.327 [2024-11-15 11:19:16.051829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.327 [2024-11-15 11:19:16.229313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.586 [2024-11-15 11:19:16.365456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.522 1+0 records in 
00:07:34.522 1+0 records out 00:07:34.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637477 s, 6.4 MB/s 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:34.522 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.089 1+0 records in 00:07:35.089 1+0 records out 00:07:35.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502503 s, 8.2 MB/s 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.089 11:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:35.347 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:35.347 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:35.347 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:35.347 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:35.347 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:35.347 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.348 1+0 records in 00:07:35.348 1+0 records out 00:07:35.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470998 s, 8.7 MB/s 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.348 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:35.606 1+0 records in 00:07:35.606 1+0 records out 00:07:35.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618066 s, 6.6 MB/s 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.606 11:19:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:35.606 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.184 1+0 records in 00:07:36.184 1+0 records out 00:07:36.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012318 s, 3.3 MB/s 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:36.184 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.185 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.185 11:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.460 1+0 records in 00:07:36.460 1+0 records out 00:07:36.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698429 s, 5.9 MB/s 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:36.460 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd0", 00:07:36.719 "bdev_name": "Nvme0n1" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd1", 00:07:36.719 "bdev_name": "Nvme1n1" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd2", 00:07:36.719 "bdev_name": "Nvme2n1" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd3", 00:07:36.719 "bdev_name": "Nvme2n2" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd4", 00:07:36.719 "bdev_name": "Nvme2n3" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd5", 00:07:36.719 "bdev_name": "Nvme3n1" 00:07:36.719 } 00:07:36.719 ]' 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd0", 00:07:36.719 "bdev_name": "Nvme0n1" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd1", 00:07:36.719 "bdev_name": "Nvme1n1" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd2", 00:07:36.719 "bdev_name": "Nvme2n1" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd3", 00:07:36.719 "bdev_name": "Nvme2n2" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd4", 00:07:36.719 "bdev_name": "Nvme2n3" 00:07:36.719 }, 00:07:36.719 { 00:07:36.719 "nbd_device": "/dev/nbd5", 00:07:36.719 "bdev_name": "Nvme3n1" 00:07:36.719 } 00:07:36.719 ]' 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.719 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.978 11:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.236 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.494 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.752 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:38.319 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:38.319 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:38.319 11:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.319 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.577 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.834 11:19:21 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:38.834 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:38.835 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:38.835 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:38.835 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:39.092 /dev/nbd0 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:39.092 
11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.092 1+0 records in 00:07:39.092 1+0 records out 00:07:39.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00166944 s, 2.5 MB/s 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.092 11:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:39.350 /dev/nbd1 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.350 1+0 records in 00:07:39.350 1+0 records out 00:07:39.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622973 s, 6.6 MB/s 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 
-- # return 0 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.350 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.351 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:39.916 /dev/nbd10 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:39.916 1+0 records in 00:07:39.916 1+0 records out 00:07:39.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596379 s, 6.9 MB/s 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:39.916 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:40.174 /dev/nbd11 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.174 1+0 records in 00:07:40.174 1+0 records out 00:07:40.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576236 s, 7.1 MB/s 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.174 11:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:40.432 /dev/nbd12 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.432 1+0 records in 00:07:40.432 1+0 records out 00:07:40.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762161 s, 5.4 MB/s 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.432 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:40.690 /dev/nbd13 00:07:40.690 11:19:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:40.690 1+0 records in 00:07:40.690 1+0 records out 00:07:40.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744467 s, 5.5 MB/s 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.690 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.948 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:40.948 { 00:07:40.948 "nbd_device": "/dev/nbd0", 00:07:40.948 "bdev_name": "Nvme0n1" 00:07:40.949 }, 00:07:40.949 { 00:07:40.949 "nbd_device": "/dev/nbd1", 00:07:40.949 "bdev_name": "Nvme1n1" 00:07:40.949 }, 00:07:40.949 { 00:07:40.949 "nbd_device": "/dev/nbd10", 00:07:40.949 "bdev_name": "Nvme2n1" 00:07:40.949 }, 00:07:40.949 { 00:07:40.949 "nbd_device": "/dev/nbd11", 00:07:40.949 "bdev_name": "Nvme2n2" 00:07:40.949 }, 00:07:40.949 { 00:07:40.949 "nbd_device": "/dev/nbd12", 00:07:40.949 "bdev_name": "Nvme2n3" 00:07:40.949 }, 00:07:40.949 { 00:07:40.949 "nbd_device": "/dev/nbd13", 00:07:40.949 "bdev_name": "Nvme3n1" 00:07:40.949 } 00:07:40.949 ]' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.207 { 00:07:41.207 "nbd_device": "/dev/nbd0", 00:07:41.207 "bdev_name": "Nvme0n1" 00:07:41.207 }, 00:07:41.207 { 00:07:41.207 "nbd_device": "/dev/nbd1", 00:07:41.207 "bdev_name": "Nvme1n1" 00:07:41.207 }, 00:07:41.207 { 00:07:41.207 "nbd_device": "/dev/nbd10", 00:07:41.207 "bdev_name": "Nvme2n1" 00:07:41.207 }, 00:07:41.207 
{ 00:07:41.207 "nbd_device": "/dev/nbd11", 00:07:41.207 "bdev_name": "Nvme2n2" 00:07:41.207 }, 00:07:41.207 { 00:07:41.207 "nbd_device": "/dev/nbd12", 00:07:41.207 "bdev_name": "Nvme2n3" 00:07:41.207 }, 00:07:41.207 { 00:07:41.207 "nbd_device": "/dev/nbd13", 00:07:41.207 "bdev_name": "Nvme3n1" 00:07:41.207 } 00:07:41.207 ]' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:41.207 /dev/nbd1 00:07:41.207 /dev/nbd10 00:07:41.207 /dev/nbd11 00:07:41.207 /dev/nbd12 00:07:41.207 /dev/nbd13' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:41.207 /dev/nbd1 00:07:41.207 /dev/nbd10 00:07:41.207 /dev/nbd11 00:07:41.207 /dev/nbd12 00:07:41.207 /dev/nbd13' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:41.207 256+0 records in 00:07:41.207 256+0 records out 00:07:41.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00802664 s, 131 MB/s 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.207 11:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:41.207 256+0 records in 00:07:41.207 256+0 records out 00:07:41.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174256 s, 6.0 MB/s 00:07:41.207 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.207 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:41.465 256+0 records in 00:07:41.465 256+0 records out 00:07:41.465 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165157 s, 6.3 MB/s 00:07:41.465 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.465 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:41.724 256+0 records in 00:07:41.724 256+0 records out 00:07:41.724 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.184357 s, 5.7 MB/s 00:07:41.724 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.724 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:41.724 256+0 records in 00:07:41.724 256+0 records out 00:07:41.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177713 s, 5.9 MB/s 00:07:41.982 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.982 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:41.982 256+0 records in 00:07:41.982 256+0 records out 00:07:41.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214104 s, 4.9 MB/s 00:07:41.982 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:41.983 11:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:42.241 256+0 records in 00:07:42.241 256+0 records out 00:07:42.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.20657 s, 5.1 MB/s 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.241 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:42.807 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.807 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.808 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:43.067 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.067 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.067 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.067 11:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.325 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.584 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:43.844 11:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.410 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:44.667 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:44.925 malloc_lvol_verify 00:07:44.925 11:19:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:45.491 6451313a-2238-4f96-8463-b15bfd103588 00:07:45.491 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:45.749 a931a7dc-b425-44c7-a52f-77ee7dbcd3d5 00:07:45.749 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:46.008 /dev/nbd0 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:46.008 mke2fs 1.47.0 (5-Feb-2023) 00:07:46.008 Discarding device blocks: 0/4096 done 00:07:46.008 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:46.008 00:07:46.008 Allocating group tables: 0/1 done 00:07:46.008 Writing inode tables: 0/1 done 00:07:46.008 Creating journal (1024 blocks): done 00:07:46.008 Writing superblocks and filesystem accounting information: 0/1 done 00:07:46.008 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:46.008 11:19:28 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.008 11:19:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61268 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61268 ']' 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61268 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61268 00:07:46.267 killing process with pid 61268 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61268' 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61268 00:07:46.267 11:19:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61268 00:07:47.671 ************************************ 00:07:47.671 END TEST bdev_nbd 00:07:47.671 ************************************ 00:07:47.671 11:19:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:47.671 00:07:47.671 real 0m14.365s 00:07:47.671 user 0m20.719s 00:07:47.671 sys 0m4.451s 00:07:47.671 11:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.671 11:19:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:47.671 11:19:30 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:47.671 11:19:30 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:47.671 skipping fio tests on NVMe due to multi-ns failures. 00:07:47.671 11:19:30 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
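The nbd teardown traced above is a two-step pattern: ask the SPDK app over its RPC socket to stop each export, then poll /proc/partitions until the kernel actually drops the device. A minimal standalone sketch of that pattern, reusing only the socket path, script location, and RPC method name visible in the trace (the device list and the 0.1 s poll interval are illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2; do
    # detach the export; the kernel entry can linger briefly afterwards
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # same 20-iteration cap as the waitfornbd_exit loop in the trace
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done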
00:07:47.671 11:19:30 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:47.671 11:19:30 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:47.671 11:19:30 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:47.671 11:19:30 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.671 11:19:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.671 ************************************ 00:07:47.671 START TEST bdev_verify 00:07:47.671 ************************************ 00:07:47.671 11:19:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:47.671 [2024-11-15 11:19:30.471517] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:07:47.672 [2024-11-15 11:19:30.471679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61692 ] 00:07:47.946 [2024-11-15 11:19:30.653883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.946 [2024-11-15 11:19:30.812698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.946 [2024-11-15 11:19:30.812700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.882 Running I/O for 5 seconds... 00:07:51.192 18240.00 IOPS, 71.25 MiB/s [2024-11-15T11:19:35.074Z] 19008.00 IOPS, 74.25 MiB/s [2024-11-15T11:19:36.008Z] 18773.33 IOPS, 73.33 MiB/s [2024-11-15T11:19:36.963Z] 18784.00 IOPS, 73.38 MiB/s [2024-11-15T11:19:36.963Z] 18713.60 IOPS, 73.10 MiB/s 00:07:54.014 Latency(us) 00:07:54.014 [2024-11-15T11:19:36.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.014 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x0 length 0xbd0bd 00:07:54.014 Nvme0n1 : 5.07 1528.00 5.97 0.00 0.00 83351.31 12749.73 81502.95 00:07:54.014 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:54.014 Nvme0n1 : 5.06 1544.59 6.03 0.00 0.00 82653.21 18469.24 77689.95 00:07:54.014 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x0 length 0xa0000 00:07:54.014 Nvme1n1 : 5.07 1527.42 5.97 0.00 0.00 83246.17 12630.57 80073.08 00:07:54.014 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0xa0000 length 0xa0000 00:07:54.014 Nvme1n1 : 5.06 1543.97 6.03 0.00 0.00 82514.03 21209.83 73876.95 00:07:54.014 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x0 length 0x80000 00:07:54.014 Nvme2n1 : 5.09 1535.10 6.00 0.00 0.00 82919.14 12809.31 75783.45 00:07:54.014 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x80000 length 0x80000 00:07:54.014 Nvme2n1 : 5.06 1543.34 6.03 0.00 0.00 82397.84 22997.18 70063.94 00:07:54.014 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x0 length 0x80000 00:07:54.014 Nvme2n2 : 5.09 1534.53 5.99 0.00 0.00 82804.75 12928.47 71970.44 00:07:54.014 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x80000 length 0x80000 00:07:54.014 Nvme2n2 : 5.06 1542.73 6.03 0.00 0.00 82269.46 22758.87 71017.19 00:07:54.014 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x0 length 0x80000 00:07:54.014 Nvme2n3 : 5.09 1533.97 5.99 0.00 0.00 82683.07 13166.78 75783.45 00:07:54.014 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x80000 length 0x80000 00:07:54.014 Nvme2n3 : 5.06 1542.15 6.02 0.00 0.00 82159.97 18588.39 75783.45 00:07:54.014 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x0 length 0x20000 00:07:54.014 Nvme3n1 : 5.09 1533.41 5.99 0.00 0.00 82564.22 12571.00 80073.08 00:07:54.014 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:54.014 Verification LBA range: start 0x20000 length 0x20000 00:07:54.014 Nvme3n1 : 5.07 1551.22 6.06 0.00 0.00 81578.97 5957.82 77213.32 00:07:54.014 [2024-11-15T11:19:36.963Z] =================================================================================================================== 00:07:54.014 [2024-11-15T11:19:36.963Z] Total : 18460.43 72.11 0.00 0.00 82593.52 5957.82 81502.95 00:07:55.390 00:07:55.390 real 0m7.715s 00:07:55.390 user 0m14.153s 00:07:55.390 sys 0m0.331s 00:07:55.390 11:19:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.390 ************************************ 00:07:55.390 END TEST bdev_verify 00:07:55.390 ************************************ 00:07:55.390 11:19:38 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:55.390 11:19:38 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:55.390 11:19:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:55.390 11:19:38 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.390 11:19:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.390 ************************************ 00:07:55.390 START TEST bdev_verify_big_io 00:07:55.390 ************************************ 00:07:55.390 11:19:38 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:55.390 [2024-11-15 11:19:38.236800] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
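The bdevperf command line for this big-I/O pass appears verbatim just above and can be replayed by hand. Everything below is copied from the trace; the one assumption is that the bdev.json generated earlier in the job still exists, and the trailing empty argument seen in the trace (apparently an env-context placeholder supplied by the harness) is dropped:

spdk=/home/vagrant/spdk_repo/spdk
# -q 128: queue depth, -o 65536: 64 KiB I/Os, -w verify: write, read back
# and compare, -t 5: run for five seconds, -m 0x3: use cores 0 and 1
"$spdk/build/examples/bdevperf" --json "$spdk/test/bdev/bdev.json" \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The only difference from the bdev_verify pass before it is the I/O size (65536 here versus 4096 there), which is why this run reports roughly a tenth of the IOPS while moving more data per command.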
00:07:55.390 [2024-11-15 11:19:38.236993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61796 ] 00:07:55.649 [2024-11-15 11:19:38.422180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.649 [2024-11-15 11:19:38.558308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.649 [2024-11-15 11:19:38.558308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.584 Running I/O for 5 seconds... 00:08:02.391 1427.00 IOPS, 89.19 MiB/s [2024-11-15T11:19:45.340Z] 2408.00 IOPS, 150.50 MiB/s [2024-11-15T11:19:45.340Z] 2917.33 IOPS, 182.33 MiB/s 00:08:02.391 Latency(us) 00:08:02.391 [2024-11-15T11:19:45.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.391 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x0 length 0xbd0b 00:08:02.391 Nvme0n1 : 5.62 125.31 7.83 0.00 0.00 978474.31 13702.98 1067641.02 00:08:02.391 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:02.391 Nvme0n1 : 5.78 131.58 8.22 0.00 0.00 945843.46 20852.36 968502.92 00:08:02.391 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x0 length 0xa000 00:08:02.391 Nvme1n1 : 5.69 121.34 7.58 0.00 0.00 977492.07 54335.30 1563331.49 00:08:02.391 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0xa000 length 0xa000 00:08:02.391 Nvme1n1 : 5.78 130.15 8.13 0.00 0.00 925227.30 52190.49 896055.85 00:08:02.391 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x0 length 0x8000 00:08:02.391 Nvme2n1 : 5.70 125.46 7.84 0.00 0.00 923537.48 73400.32 1601461.53 00:08:02.391 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x8000 length 0x8000 00:08:02.391 Nvme2n1 : 5.78 129.38 8.09 0.00 0.00 901209.21 52428.80 899868.86 00:08:02.391 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x0 length 0x8000 00:08:02.391 Nvme2n2 : 5.84 138.79 8.67 0.00 0.00 812820.36 67204.19 1189657.13 00:08:02.391 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x8000 length 0x8000 00:08:02.391 Nvme2n2 : 5.78 132.79 8.30 0.00 0.00 860270.78 100091.35 1014258.97 00:08:02.391 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x0 length 0x8000 00:08:02.391 Nvme2n3 : 5.90 149.73 9.36 0.00 0.00 731488.72 18350.08 1212535.16 00:08:02.391 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x8000 length 0x8000 00:08:02.391 Nvme2n3 : 5.85 142.16 8.88 0.00 0.00 785674.99 13702.98 899868.86 00:08:02.391 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x0 length 0x2000 00:08:02.391 Nvme3n1 : 5.92 163.43 10.21 0.00 0.00 654643.37 1921.40 1738729.66 00:08:02.391 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:08:02.391 Verification LBA range: start 0x2000 length 0x2000 00:08:02.391 Nvme3n1 : 5.86 152.79 9.55 0.00 0.00 711459.89 1824.58 934185.89 00:08:02.391 [2024-11-15T11:19:45.340Z] =================================================================================================================== 00:08:02.391 [2024-11-15T11:19:45.340Z] Total : 1642.92 102.68 0.00 0.00 840055.60 1824.58 1738729.66 00:08:04.295 00:08:04.295 real 0m8.770s 00:08:04.295 user 0m16.244s 00:08:04.295 sys 0m0.393s 00:08:04.295 11:19:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.295 11:19:46 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:04.295 ************************************ 00:08:04.295 END TEST bdev_verify_big_io 00:08:04.295 ************************************ 00:08:04.296 11:19:46 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:04.296 11:19:46 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:04.296 11:19:46 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.296 11:19:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.296 ************************************ 00:08:04.296 START TEST bdev_write_zeroes 00:08:04.296 ************************************ 00:08:04.296 11:19:46 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:04.296 [2024-11-15 11:19:47.058495] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:04.296 [2024-11-15 11:19:47.058695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:08:04.296 [2024-11-15 11:19:47.240121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.582 [2024-11-15 11:19:47.368518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.177 Running I/O for 1 seconds... 
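The write_zeroes result that follows is easy to sanity-check: the workload was started with 4096-byte commands (-o 4096 above), so the IOPS and MiB/s headline figures must differ by exactly a factor of 4096/2^20:

# 56064 IOPS x 4096 B = 229,638,144 B/s; over 2^20 that is 219.00 MiB/s,
# matching the first record below
awk 'BEGIN { printf "%.2f MiB/s\n", 56064 * 4096 / 1048576 }'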
00:08:06.550 56064.00 IOPS, 219.00 MiB/s 00:08:06.550 Latency(us) 00:08:06.550 [2024-11-15T11:19:49.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.550 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:06.550 Nvme0n1 : 1.03 9248.13 36.13 0.00 0.00 13804.95 6672.76 28120.90 00:08:06.550 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:06.550 Nvme1n1 : 1.03 9234.28 36.07 0.00 0.00 13802.10 11200.70 27286.81 00:08:06.550 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:06.550 Nvme2n1 : 1.03 9220.70 36.02 0.00 0.00 13780.87 10545.34 26452.71 00:08:06.550 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:06.550 Nvme2n2 : 1.04 9206.93 35.96 0.00 0.00 13739.80 7685.59 25737.77 00:08:06.550 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:06.550 Nvme2n3 : 1.04 9193.44 35.91 0.00 0.00 13734.75 7268.54 26214.40 00:08:06.550 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:06.550 Nvme3n1 : 1.04 9179.71 35.86 0.00 0.00 13728.99 6970.65 28478.37 00:08:06.550 [2024-11-15T11:19:49.499Z] =================================================================================================================== 00:08:06.550 [2024-11-15T11:19:49.499Z] Total : 55283.20 215.95 0.00 0.00 13765.24 6672.76 28478.37 00:08:07.484 00:08:07.484 real 0m3.258s 00:08:07.484 user 0m2.805s 00:08:07.484 sys 0m0.332s 00:08:07.484 11:19:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.484 11:19:50 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:07.484 ************************************ 00:08:07.484 END TEST bdev_write_zeroes 00:08:07.484 ************************************ 00:08:07.484 11:19:50 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.484 11:19:50 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:07.484 11:19:50 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.484 11:19:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.484 ************************************ 00:08:07.484 START TEST bdev_json_nonenclosed 00:08:07.484 ************************************ 00:08:07.484 11:19:50 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:07.484 [2024-11-15 11:19:50.409247] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
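Every sub-test in this log is bracketed by the same asterisk banners, emitted by the harness's run_test helper from common/autotest_common.sh. The helper's body is never printed, so what follows is a simplified reconstruction of just the observable behavior (banner text, timed execution, propagated exit status), not the real implementation:

run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}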
00:08:07.484 [2024-11-15 11:19:50.409475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61958 ] 00:08:07.742 [2024-11-15 11:19:50.599822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.999 [2024-11-15 11:19:50.767514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.999 [2024-11-15 11:19:50.767649] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:07.999 [2024-11-15 11:19:50.767679] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:07.999 [2024-11-15 11:19:50.767694] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.256 00:08:08.256 real 0m0.774s 00:08:08.256 user 0m0.503s 00:08:08.256 sys 0m0.162s 00:08:08.256 11:19:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.256 11:19:51 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:08.256 ************************************ 00:08:08.256 END TEST bdev_json_nonenclosed 00:08:08.256 ************************************ 00:08:08.256 11:19:51 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.256 11:19:51 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:08.256 11:19:51 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.256 11:19:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.256 ************************************ 00:08:08.256 START TEST bdev_json_nonarray 00:08:08.256 ************************************ 00:08:08.256 11:19:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.256 [2024-11-15 11:19:51.195128] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:08.256 [2024-11-15 11:19:51.195331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61989 ] 00:08:08.514 [2024-11-15 11:19:51.383638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.771 [2024-11-15 11:19:51.515106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.771 [2024-11-15 11:19:51.515230] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
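The fixtures behind these two negative tests are not reproduced in the log, but the error messages pin down their shape: nonenclosed.json carries configuration content with no enclosing top-level object, and nonarray.json makes "subsystems" something other than an array. Hypothetical minimal reconstructions, where only the quoted error text is authoritative:

# provokes: Invalid JSON configuration: not enclosed in {}.
cat > nonenclosed.json <<'EOF'
"subsystems": []
EOF

# provokes: Invalid JSON configuration: 'subsystems' should be an array.
cat > nonarray.json <<'EOF'
{ "subsystems": {} }
EOF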
00:08:08.771 [2024-11-15 11:19:51.515259] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:08.771 [2024-11-15 11:19:51.515274] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.030 00:08:09.030 real 0m0.681s 00:08:09.030 user 0m0.453s 00:08:09.030 sys 0m0.122s 00:08:09.030 11:19:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.030 ************************************ 00:08:09.030 END TEST bdev_json_nonarray 00:08:09.030 ************************************ 00:08:09.030 11:19:51 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:09.030 11:19:51 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:09.030 00:08:09.030 real 0m45.394s 00:08:09.030 user 1m8.418s 00:08:09.030 sys 0m7.550s 00:08:09.030 11:19:51 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.030 11:19:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.030 ************************************ 00:08:09.030 END TEST blockdev_nvme 00:08:09.030 ************************************ 00:08:09.030 11:19:51 -- spdk/autotest.sh@209 -- # uname -s 00:08:09.030 11:19:51 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:09.030 11:19:51 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:09.030 11:19:51 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:09.030 11:19:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.030 11:19:51 -- common/autotest_common.sh@10 -- # set +x 00:08:09.030 ************************************ 00:08:09.030 START TEST blockdev_nvme_gpt 00:08:09.030 ************************************ 00:08:09.030 11:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:09.030 * Looking for test storage... 
00:08:09.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:09.030 11:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:09.030 11:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:08:09.030 11:19:51 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.289 11:19:52 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:09.289 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.289 --rc genhtml_branch_coverage=1 00:08:09.289 --rc genhtml_function_coverage=1 00:08:09.289 --rc genhtml_legend=1 00:08:09.289 --rc geninfo_all_blocks=1 00:08:09.289 --rc geninfo_unexecuted_blocks=1 00:08:09.289 00:08:09.289 ' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.289 --rc 
genhtml_branch_coverage=1 00:08:09.289 --rc genhtml_function_coverage=1 00:08:09.289 --rc genhtml_legend=1 00:08:09.289 --rc geninfo_all_blocks=1 00:08:09.289 --rc geninfo_unexecuted_blocks=1 00:08:09.289 00:08:09.289 ' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.289 --rc genhtml_branch_coverage=1 00:08:09.289 --rc genhtml_function_coverage=1 00:08:09.289 --rc genhtml_legend=1 00:08:09.289 --rc geninfo_all_blocks=1 00:08:09.289 --rc geninfo_unexecuted_blocks=1 00:08:09.289 00:08:09.289 ' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:09.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.289 --rc genhtml_branch_coverage=1 00:08:09.289 --rc genhtml_function_coverage=1 00:08:09.289 --rc genhtml_legend=1 00:08:09.289 --rc geninfo_all_blocks=1 00:08:09.289 --rc geninfo_unexecuted_blocks=1 00:08:09.289 00:08:09.289 ' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:09.289 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62073 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62073 
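[Note] The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, so the matching --rc option names get exported for coverage runs. A condensed, hypothetical sketch of that comparison logic (the real helper also sanitizes non-numeric version components):

# Return 0 (true) when version $1 sorts strictly before version $2.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing component decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal versions are not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use legacy lcov_* rc names"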
00:08:09.290 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 62073 ']' 00:08:09.290 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.290 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.290 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.290 11:19:52 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:09.290 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.290 11:19:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.290 [2024-11-15 11:19:52.185935] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:09.290 [2024-11-15 11:19:52.186149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62073 ] 00:08:09.607 [2024-11-15 11:19:52.372367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.607 [2024-11-15 11:19:52.504930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.540 11:19:53 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:10.540 11:19:53 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:08:10.540 11:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:10.540 11:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:08:10.540 11:19:53 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:10.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:11.055 Waiting for block devices as requested 00:08:11.055 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.314 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.314 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:11.314 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:16.580 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:16.580 11:19:59 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:16.580 BYT; 00:08:16.580 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:16.580 BYT; 00:08:16.580 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.580 11:19:59 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:16.580 11:19:59 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:17.510 The operation has completed successfully. 00:08:17.511 11:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:18.887 The operation has completed successfully. 00:08:18.887 11:20:01 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.711 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.711 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.711 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.711 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.711 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:19.711 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.711 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.711 [] 00:08:19.711 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.711 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:19.711 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:19.711 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:19.711 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.970 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:19.970 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.970 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.229 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.229 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:08:20.229 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:20.229 11:20:02 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.229 11:20:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:20.229 11:20:02 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.229 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.229 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:20.229 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:20.229 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.229 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.229 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:20.229 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:20.230 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9054c716-10da-46f3-906c-338aba01824d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9054c716-10da-46f3-906c-338aba01824d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8bddcf5b-ee35-4c0a-935c-341acc54ab51"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8bddcf5b-ee35-4c0a-935c-341acc54ab51",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b496cc1f-9393-4ad1-8f9c-c78bf149d2ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b496cc1f-9393-4ad1-8f9c-c78bf149d2ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "40f6eb3d-668d-4165-b307-67daa8e1f1e0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "40f6eb3d-668d-4165-b307-67daa8e1f1e0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3a7d0132-8b91-43b2-bbbc-c0c6678a8840"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3a7d0132-8b91-43b2-bbbc-c0c6678a8840",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:20.488 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:20.488 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:20.488 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:20.488 11:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62073 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 62073 ']' 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 62073 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62073 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.488 killing process with pid 62073 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62073' 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 62073 00:08:20.488 11:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 62073 00:08:23.035 11:20:05 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:23.035 11:20:05 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:23.035 11:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:08:23.035 11:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:23.035 11:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.035 ************************************ 00:08:23.035 START TEST bdev_hello_world 00:08:23.035 ************************************ 00:08:23.035 11:20:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:23.035 
[2024-11-15 11:20:05.583759] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:23.035 [2024-11-15 11:20:05.583975] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62710 ] 00:08:23.035 [2024-11-15 11:20:05.762449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.035 [2024-11-15 11:20:05.897340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.967 [2024-11-15 11:20:06.576869] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:23.967 [2024-11-15 11:20:06.576944] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:23.967 [2024-11-15 11:20:06.576984] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:23.967 [2024-11-15 11:20:06.580277] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:23.967 [2024-11-15 11:20:06.580791] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:23.967 [2024-11-15 11:20:06.580839] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:23.967 [2024-11-15 11:20:06.581083] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:23.967 00:08:23.967 [2024-11-15 11:20:06.581130] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:24.901 00:08:24.902 real 0m2.182s 00:08:24.902 user 0m1.788s 00:08:24.902 sys 0m0.282s 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:24.902 ************************************ 00:08:24.902 END TEST bdev_hello_world 00:08:24.902 ************************************ 00:08:24.902 11:20:07 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:24.902 11:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:24.902 11:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.902 11:20:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.902 ************************************ 00:08:24.902 START TEST bdev_bounds 00:08:24.902 ************************************ 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62752 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:24.902 Process bdevio pid: 62752 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62752' 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62752 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62752 ']' 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.902 11:20:07 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.902 11:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:25.160 [2024-11-15 11:20:07.808605] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:25.160 [2024-11-15 11:20:07.808813] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62752 ] 00:08:25.418 [2024-11-15 11:20:07.984492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.418 [2024-11-15 11:20:08.121832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.418 [2024-11-15 11:20:08.121923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.418 [2024-11-15 11:20:08.121948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.984 11:20:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.984 11:20:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:25.984 11:20:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:26.243 I/O targets: 00:08:26.243 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:26.243 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:26.243 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:26.243 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:26.243 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:26.243 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:26.243 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:26.243 00:08:26.243 00:08:26.243 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.243 http://cunit.sourceforge.net/ 00:08:26.243 00:08:26.243 00:08:26.243 Suite: bdevio tests on: Nvme3n1 00:08:26.243 Test: blockdev write read block ...passed 00:08:26.243 Test: blockdev write zeroes read block ...passed 00:08:26.243 Test: blockdev write zeroes read no split ...passed 00:08:26.243 Test: blockdev write zeroes read split ...passed 00:08:26.243 Test: blockdev write zeroes read split partial ...passed 00:08:26.243 Test: blockdev reset ...[2024-11-15 11:20:08.990075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:26.243 [2024-11-15 11:20:08.993988] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:08:26.243 passed 00:08:26.243 Test: blockdev write read 8 blocks ...passed 00:08:26.243 Test: blockdev write read size > 128k ...passed 00:08:26.243 Test: blockdev write read invalid size ...passed 00:08:26.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.243 Test: blockdev write read max offset ...passed 00:08:26.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.243 Test: blockdev writev readv 8 blocks ...passed 00:08:26.243 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.243 Test: blockdev writev readv block ...passed 00:08:26.243 Test: blockdev writev readv size > 128k ...passed 00:08:26.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.243 Test: blockdev comparev and writev ...[2024-11-15 11:20:09.002964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8004000 len:0x1000 00:08:26.243 [2024-11-15 11:20:09.003043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:26.243 passed 00:08:26.243 Test: blockdev nvme passthru rw ...passed 00:08:26.243 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:20:09.004061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:26.243 [2024-11-15 11:20:09.004110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:26.243 passed 00:08:26.243 Test: blockdev nvme admin passthru ...passed 00:08:26.243 Test: blockdev copy ...passed 00:08:26.243 Suite: bdevio tests on: Nvme2n3 00:08:26.243 Test: blockdev write read block ...passed 00:08:26.243 Test: blockdev write zeroes read block ...passed 00:08:26.243 Test: blockdev write zeroes read no split ...passed 00:08:26.243 Test: blockdev write zeroes read split ...passed 00:08:26.243 Test: blockdev write zeroes read split partial ...passed 00:08:26.243 Test: blockdev reset ...[2024-11-15 11:20:09.068914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:26.243 [2024-11-15 11:20:09.073446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:26.243 passed
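[Note] The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions printed in these bdevio suites are expected negative results, not test failures; the pair is the NVMe status code type and status code. A small sketch of unpacking that pair, assuming the phase bit has already been stripped so SC sits in the low 8 bits and SCT in the 3 bits above it:

sf=0x285                                        # example status field: SCT=0x2, SC=0x85
printf 'SCT=%02x SC=%02x\n' $(( (sf >> 8) & 0x7 )) $(( sf & 0xff ))
# prints "SCT=02 SC=85": type 2 (media/data-integrity errors), code 0x85 (Compare Failure)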
00:08:26.243 Test: blockdev write read 8 blocks ...passed 00:08:26.243 Test: blockdev write read size > 128k ...passed 00:08:26.243 Test: blockdev write read invalid size ...passed 00:08:26.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.243 Test: blockdev write read max offset ...passed 00:08:26.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.243 Test: blockdev writev readv 8 blocks ...passed 00:08:26.243 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.243 Test: blockdev writev readv block ...passed 00:08:26.243 Test: blockdev writev readv size > 128k ...passed 00:08:26.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.243 Test: blockdev comparev and writev ...[2024-11-15 11:20:09.082007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8002000 len:0x1000 00:08:26.243 [2024-11-15 11:20:09.082083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:26.243 passed 00:08:26.243 Test: blockdev nvme passthru rw ...passed 00:08:26.243 Test: blockdev nvme passthru vendor specific ...passed 00:08:26.243 Test: blockdev nvme admin passthru ...[2024-11-15 11:20:09.082798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:26.243 [2024-11-15 11:20:09.082846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:26.243 passed 00:08:26.243 Test: blockdev copy ...passed 00:08:26.243 Suite: bdevio tests on: Nvme2n2 00:08:26.243 Test: blockdev write read block ...passed 00:08:26.243 Test: blockdev write zeroes read block ...passed 00:08:26.243 Test: blockdev write zeroes read no split ...passed 00:08:26.243 Test: blockdev write zeroes read split ...passed 00:08:26.243 Test: blockdev write zeroes read split partial ...passed 00:08:26.243 Test: blockdev reset ...[2024-11-15 11:20:09.146675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:26.243 [2024-11-15 11:20:09.151082] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:26.243 passed 00:08:26.244 Test: blockdev write read 8 blocks ...passed 00:08:26.244 Test: blockdev write read size > 128k ...passed 00:08:26.244 Test: blockdev write read invalid size ...passed 00:08:26.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.244 Test: blockdev write read max offset ...passed 00:08:26.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.244 Test: blockdev writev readv 8 blocks ...passed 00:08:26.244 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.244 Test: blockdev writev readv block ...passed 00:08:26.244 Test: blockdev writev readv size > 128k ...passed 00:08:26.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.244 Test: blockdev comparev and writev ...[2024-11-15 11:20:09.159781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dc638000 len:0x1000 00:08:26.244 [2024-11-15 11:20:09.159840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:26.244 passed 00:08:26.244 Test: blockdev nvme passthru rw ...passed 00:08:26.244 Test: blockdev nvme passthru vendor specific ...passed 00:08:26.244 Test: blockdev nvme admin passthru ...[2024-11-15 11:20:09.160713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:26.244 [2024-11-15 11:20:09.160761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:26.244 passed 00:08:26.244 Test: blockdev copy ...passed 00:08:26.244 Suite: bdevio tests on: Nvme2n1 00:08:26.244 Test: blockdev write read block ...passed 00:08:26.244 Test: blockdev write zeroes read block ...passed 00:08:26.244 Test: blockdev write zeroes read no split ...passed 00:08:26.502 Test: blockdev write zeroes read split ...passed 00:08:26.502 Test: blockdev write zeroes read split partial ...passed 00:08:26.502 Test: blockdev reset ...[2024-11-15 11:20:09.225270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:26.502 [2024-11-15 11:20:09.229529] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:26.502 passed 00:08:26.502 Test: blockdev write read 8 blocks ...passed 00:08:26.502 Test: blockdev write read size > 128k ...passed 00:08:26.502 Test: blockdev write read invalid size ...passed 00:08:26.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.502 Test: blockdev write read max offset ...passed 00:08:26.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.502 Test: blockdev writev readv 8 blocks ...passed 00:08:26.502 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.502 Test: blockdev writev readv block ...passed 00:08:26.502 Test: blockdev writev readv size > 128k ...passed 00:08:26.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.502 Test: blockdev comparev and writev ...[2024-11-15 11:20:09.239737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dc634000 len:0x1000 00:08:26.502 passed 00:08:26.502 Test: blockdev nvme passthru rw ...[2024-11-15 11:20:09.239953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:26.502 passed 00:08:26.502 Test: blockdev nvme passthru vendor specific ...passed 00:08:26.502 Test: blockdev nvme admin passthru ...[2024-11-15 11:20:09.240798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:26.502 [2024-11-15 11:20:09.240850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:26.502 passed 00:08:26.503 Test: blockdev copy ...passed 00:08:26.503 Suite: bdevio tests on: Nvme1n1p2 00:08:26.503 Test: blockdev write read block ...passed 00:08:26.503 Test: blockdev write zeroes read block ...passed 00:08:26.503 Test: blockdev write zeroes read no split ...passed 00:08:26.503 Test: blockdev write zeroes read split ...passed 00:08:26.503 Test: blockdev write zeroes read split partial ...passed 00:08:26.503 Test: blockdev reset ...[2024-11-15 11:20:09.302066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:26.503 [2024-11-15 11:20:09.306085] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:26.503 passed 00:08:26.503 Test: blockdev write read 8 blocks ...passed 00:08:26.503 Test: blockdev write read size > 128k ...passed 00:08:26.503 Test: blockdev write read invalid size ...passed 00:08:26.503 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.503 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.503 Test: blockdev write read max offset ...passed 00:08:26.503 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.503 Test: blockdev writev readv 8 blocks ...passed 00:08:26.503 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.503 Test: blockdev writev readv block ...passed 00:08:26.503 Test: blockdev writev readv size > 128k ...passed 00:08:26.503 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.503 Test: blockdev comparev and writev ...[2024-11-15 11:20:09.315778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dc630000 len:0x1000 00:08:26.503 [2024-11-15 11:20:09.315850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:26.503 passed 00:08:26.503 Test: blockdev nvme passthru rw ...passed 00:08:26.503 Test: blockdev nvme passthru vendor specific ...passed 00:08:26.503 Test: blockdev nvme admin passthru ...passed 00:08:26.503 Test: blockdev copy ...passed 00:08:26.503 Suite: bdevio tests on: Nvme1n1p1 00:08:26.503 Test: blockdev write read block ...passed 00:08:26.503 Test: blockdev write zeroes read block ...passed 00:08:26.503 Test: blockdev write zeroes read no split ...passed 00:08:26.503 Test: blockdev write zeroes read split ...passed 00:08:26.503 Test: blockdev write zeroes read split partial ...passed 00:08:26.503 Test: blockdev reset ...[2024-11-15 11:20:09.378357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:26.503 [2024-11-15 11:20:09.382031] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:26.503 passed 00:08:26.503 Test: blockdev write read 8 blocks ...passed 00:08:26.503 Test: blockdev write read size > 128k ...passed 00:08:26.503 Test: blockdev write read invalid size ...passed 00:08:26.503 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.503 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.503 Test: blockdev write read max offset ...passed 00:08:26.503 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.503 Test: blockdev writev readv 8 blocks ...passed 00:08:26.503 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.503 Test: blockdev writev readv block ...passed 00:08:26.503 Test: blockdev writev readv size > 128k ...passed 00:08:26.503 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.503 Test: blockdev comparev and writev ...[2024-11-15 11:20:09.391409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c8a0e000 len:0x1000 00:08:26.503 [2024-11-15 11:20:09.391478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:26.503 passed 00:08:26.503 Test: blockdev nvme passthru rw ...passed 00:08:26.503 Test: blockdev nvme passthru vendor specific ...passed 00:08:26.503 Test: blockdev nvme admin passthru ...passed 00:08:26.503 Test: blockdev copy ...passed 00:08:26.503 Suite: bdevio tests on: Nvme0n1 00:08:26.503 Test: blockdev write read block ...passed 00:08:26.503 Test: blockdev write zeroes read block ...passed 00:08:26.503 Test: blockdev write zeroes read no split ...passed 00:08:26.503 Test: blockdev write zeroes read split ...passed 00:08:26.503 Test: blockdev write zeroes read split partial ...passed 00:08:26.503 Test: blockdev reset ...[2024-11-15 11:20:09.446043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:26.761 [2024-11-15 11:20:09.449958] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:26.761 passed 00:08:26.761 Test: blockdev write read 8 blocks ...passed 00:08:26.761 Test: blockdev write read size > 128k ...passed 00:08:26.761 Test: blockdev write read invalid size ...passed 00:08:26.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:26.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:26.761 Test: blockdev write read max offset ...passed 00:08:26.761 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:26.761 Test: blockdev writev readv 8 blocks ...passed 00:08:26.761 Test: blockdev writev readv 30 x 1block ...passed 00:08:26.761 Test: blockdev writev readv block ...passed 00:08:26.761 Test: blockdev writev readv size > 128k ...passed 00:08:26.761 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:26.761 Test: blockdev comparev and writev ...passed 00:08:26.761 Test: blockdev nvme passthru rw ...[2024-11-15 11:20:09.457086] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:26.761 separate metadata which is not supported yet.
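The skip notice above is bdevio declining comparev_and_writev on Nvme0n1 because that namespace carries separate (non-interleaved) metadata. To confirm a bdev's metadata layout by hand, something like the sketch below can be run against the app's RPC socket; the socket path and the md_size field name are assumptions based on typical bdev_get_bdevs output, not taken from this run:

    # Hedged sketch: inspect a bdev's metadata layout (field name assumed).
    sock=/var/tmp/spdk.sock   # default app socket; this run's bdevio app may differ
    ./scripts/rpc.py -s "$sock" bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {name, block_size, md_size}'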
00:08:26.761 passed 00:08:26.761 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:20:09.457665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:26.761 passed 00:08:26.761 Test: blockdev nvme admin passthru ...[2024-11-15 11:20:09.457831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:26.761 passed 00:08:26.761 Test: blockdev copy ...passed 00:08:26.761 00:08:26.761 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.761 suites 7 7 n/a 0 0 00:08:26.761 tests 161 161 161 0 0 00:08:26.761 asserts 1025 1025 1025 0 n/a 00:08:26.761 00:08:26.761 Elapsed time = 1.432 seconds 00:08:26.761 0 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62752 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62752 ']' 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62752 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62752 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62752' 00:08:26.761 killing process with pid 62752 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62752 00:08:26.761 11:20:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62752 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:27.695 00:08:27.695 real 0m2.785s 00:08:27.695 user 0m7.192s 00:08:27.695 sys 0m0.427s 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:27.695 ************************************ 00:08:27.695 END TEST bdev_bounds 00:08:27.695 ************************************ 00:08:27.695 11:20:10 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:27.695 11:20:10 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:27.695 11:20:10 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.695 11:20:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.695 ************************************ 00:08:27.695 START TEST bdev_nbd 00:08:27.695 ************************************ 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[
Linux == Linux ]] 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62812 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62812 /var/tmp/spdk-nbd.sock 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 62812 ']' 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:27.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.695 11:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:27.953 [2024-11-15 11:20:10.650712] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
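The trace above launches bdev_svc with the test's JSON config on a private RPC socket and then sits in waitforlisten until that socket answers. A condensed sketch of the same startup handshake, with paths taken from this run; the polling detail is an approximation of what waitforlisten does rather than a copy of it:

    # Start the bdev service and block until its RPC socket responds.
    sock=/var/tmp/spdk-nbd.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json ./test/bdev/bdev.json &
    svc_pid=$!
    until ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; do
        kill -0 "$svc_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
        sleep 0.1
    done
    echo "RPC listening on $sock (pid $svc_pid)"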
00:08:27.953 [2024-11-15 11:20:10.651133] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.953 [2024-11-15 11:20:10.832227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.211 [2024-11-15 11:20:10.954342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:28.777 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.035 1+0 records in 00:08:29.035 1+0 records out 00:08:29.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465422 s, 8.8 MB/s 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:29.035 11:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:29.293 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.562 1+0 records in 00:08:29.562 1+0 records out 00:08:29.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666129 s, 6.1 MB/s 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:29.562 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.836 1+0 records in 00:08:29.836 1+0 records out 00:08:29.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674485 s, 6.1 MB/s 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:29.836 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.095 1+0 records in 00:08:30.095 1+0 records out 00:08:30.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650445 s, 6.3 MB/s 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:30.095 11:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.353 1+0 records in 00:08:30.353 1+0 records out 00:08:30.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107506 s, 3.8 MB/s 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:30.353 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
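Every device in this first pass follows the same pattern: rpc.py nbd_start_disk exports the bdev (and, with no device argument, prints the node the kernel assigned), then the waitfornbd helper polls /proc/partitions and proves the node with one direct 4 KiB read. Condensed from the trace, using Nvme2n3 as in the command just issued:

    # Export one bdev over NBD and wait until the kernel device is usable.
    sock=/var/tmp/spdk-nbd.sock
    dev=$(./scripts/rpc.py -s "$sock" nbd_start_disk Nvme2n3)   # prints e.g. /dev/nbd5
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1
    done
    # One O_DIRECT read confirms real I/O works, not merely that the node appeared.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rm -f /tmp/nbdtest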
00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.919 1+0 records in 00:08:30.919 1+0 records out 00:08:30.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600105 s, 6.8 MB/s 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:30.919 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.178 1+0 records in 00:08:31.178 1+0 records out 00:08:31.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902234 s, 4.5 MB/s 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:31.178 11:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd0", 00:08:31.436 "bdev_name": "Nvme0n1" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd1", 00:08:31.436 "bdev_name": "Nvme1n1p1" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd2", 00:08:31.436 "bdev_name": "Nvme1n1p2" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd3", 00:08:31.436 "bdev_name": "Nvme2n1" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd4", 00:08:31.436 "bdev_name": "Nvme2n2" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd5", 00:08:31.436 "bdev_name": "Nvme2n3" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd6", 00:08:31.436 "bdev_name": "Nvme3n1" 00:08:31.436 } 00:08:31.436 ]' 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd0", 00:08:31.436 "bdev_name": "Nvme0n1" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd1", 00:08:31.436 "bdev_name": "Nvme1n1p1" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd2", 00:08:31.436 "bdev_name": "Nvme1n1p2" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd3", 00:08:31.436 "bdev_name": "Nvme2n1" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd4", 00:08:31.436 "bdev_name": "Nvme2n2" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd5", 00:08:31.436 "bdev_name": "Nvme2n3" 00:08:31.436 }, 00:08:31.436 { 00:08:31.436 "nbd_device": "/dev/nbd6", 00:08:31.436 "bdev_name": "Nvme3n1" 00:08:31.436 } 00:08:31.436 ]' 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.436 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.694 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.954 11:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:32.214 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:32.214 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.215 11:20:15 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.473 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.731 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.989 11:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
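The teardown half mirrors the startup: nbd_stop_disk per device, then waitfornbd_exit polls until the name drops out of /proc/partitions. A condensed sketch of the loop the trace repeats for nbd0 through nbd6:

    # Stop every export and wait for each kernel node to disappear.
    sock=/var/tmp/spdk-nbd.sock
    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
        ./scripts/rpc.py -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done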
00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.247 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:33.505 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:33.505 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:33.505 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:33.763 11:20:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:33.763 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:34.021 /dev/nbd0 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.021 1+0 records in 00:08:34.021 1+0 records out 00:08:34.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405247 s, 10.1 MB/s 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:34.021 11:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:34.280 /dev/nbd1 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:34.280 11:20:17 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.280 1+0 records in 00:08:34.280 1+0 records out 00:08:34.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482619 s, 8.5 MB/s 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:34.280 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:34.538 /dev/nbd10 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:34.538 1+0 records in 00:08:34.538 1+0 records out 00:08:34.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649202 s, 6.3 MB/s 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:34.538 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:35.105 /dev/nbd11 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.105 1+0 records in 00:08:35.105 1+0 records out 00:08:35.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585863 s, 7.0 MB/s 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:35.105 11:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:35.363 /dev/nbd12 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
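Where the first pass let the RPC pick device nodes, this nbd_rpc_data_verify pass pins each bdev to an explicit node (Nvme0n1 to /dev/nbd0, Nvme1n1p1 to /dev/nbd1, then /dev/nbd10 through /dev/nbd14 for the rest) and later counts the exports via nbd_get_disks. A trimmed sketch of the pattern, two devices shown, mapping per the trace:

    # Pin bdevs to explicit nbd nodes, then verify the export count.
    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme1n1p2 /dev/nbd10
    count=$(./scripts/rpc.py -s "$sock" nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    echo "$count bdevs exported"   # the trace expects 7 here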
00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.363 1+0 records in 00:08:35.363 1+0 records out 00:08:35.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682144 s, 6.0 MB/s 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:35.363 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:35.621 /dev/nbd13 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.621 1+0 records in 00:08:35.621 1+0 records out 00:08:35.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821896 s, 5.0 MB/s 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:35.621 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:35.879 /dev/nbd14 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.879 1+0 records in 00:08:35.879 1+0 records out 00:08:35.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702043 s, 5.8 MB/s 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.879 11:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.137 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd0", 00:08:36.137 "bdev_name": "Nvme0n1" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd1", 00:08:36.137 "bdev_name": "Nvme1n1p1" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd10", 00:08:36.137 "bdev_name": "Nvme1n1p2" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd11", 00:08:36.137 "bdev_name": "Nvme2n1" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd12", 00:08:36.137 "bdev_name": "Nvme2n2" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd13", 00:08:36.137 "bdev_name": "Nvme2n3" 
00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd14", 00:08:36.137 "bdev_name": "Nvme3n1" 00:08:36.137 } 00:08:36.137 ]' 00:08:36.137 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd0", 00:08:36.137 "bdev_name": "Nvme0n1" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd1", 00:08:36.137 "bdev_name": "Nvme1n1p1" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd10", 00:08:36.137 "bdev_name": "Nvme1n1p2" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd11", 00:08:36.137 "bdev_name": "Nvme2n1" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd12", 00:08:36.137 "bdev_name": "Nvme2n2" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd13", 00:08:36.137 "bdev_name": "Nvme2n3" 00:08:36.137 }, 00:08:36.137 { 00:08:36.137 "nbd_device": "/dev/nbd14", 00:08:36.137 "bdev_name": "Nvme3n1" 00:08:36.137 } 00:08:36.137 ]' 00:08:36.137 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:36.395 /dev/nbd1 00:08:36.395 /dev/nbd10 00:08:36.395 /dev/nbd11 00:08:36.395 /dev/nbd12 00:08:36.395 /dev/nbd13 00:08:36.395 /dev/nbd14' 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:36.395 /dev/nbd1 00:08:36.395 /dev/nbd10 00:08:36.395 /dev/nbd11 00:08:36.395 /dev/nbd12 00:08:36.395 /dev/nbd13 00:08:36.395 /dev/nbd14' 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:36.395 256+0 records in 00:08:36.395 256+0 records out 00:08:36.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059083 s, 177 MB/s 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:36.395 256+0 records in 00:08:36.395 256+0 records out 00:08:36.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.18917 s, 5.5 MB/s 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.395 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:36.652 256+0 records in 00:08:36.652 256+0 records out 00:08:36.652 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.192449 s, 5.4 MB/s 00:08:36.652 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.652 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:36.911 256+0 records in 00:08:36.911 256+0 records out 00:08:36.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167101 s, 6.3 MB/s 00:08:36.911 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.911 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:37.169 256+0 records in 00:08:37.169 256+0 records out 00:08:37.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156643 s, 6.7 MB/s 00:08:37.169 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.169 11:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:37.169 256+0 records in 00:08:37.169 256+0 records out 00:08:37.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179741 s, 5.8 MB/s 00:08:37.169 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.169 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:37.427 256+0 records in 00:08:37.427 256+0 records out 00:08:37.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18834 s, 5.6 MB/s 00:08:37.427 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.427 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:37.685 256+0 records in 00:08:37.685 256+0 records out 00:08:37.685 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156571 s, 6.7 MB/s 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.685 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.943 11:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.201 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.459 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.717 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:38.976 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.233 11:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:39.491 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.749 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:40.007 11:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:40.265 malloc_lvol_verify 00:08:40.265 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:40.523 000d9e8d-28be-40a2-8bf6-ddd39a6a54fb 00:08:40.523 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:40.782 189ee0a6-a6c3-4353-a04f-13dcf1b45480 00:08:40.782 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:41.042 /dev/nbd0 00:08:41.042 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:41.042 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:41.042 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:41.042 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:41.042 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:41.042 mke2fs 1.47.0 (5-Feb-2023) 00:08:41.042 Discarding device blocks: 0/4096 done 00:08:41.042 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:41.042 00:08:41.042 Allocating group tables: 0/1 done 00:08:41.042 Writing inode tables: 0/1 done 00:08:41.300 Creating journal (1024 blocks): done 00:08:41.300 Writing superblocks and filesystem accounting information: 0/1 done 00:08:41.300 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:41.300 11:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62812 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 62812 ']' 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 62812 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62812 00:08:41.559 killing process with pid 62812 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62812' 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 62812 00:08:41.559 11:20:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 62812 00:08:42.494 11:20:25 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:42.494 00:08:42.494 real 0m14.823s 00:08:42.494 user 0m21.109s 00:08:42.494 sys 0m4.881s 00:08:42.494 11:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.494 11:20:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 ************************************ 00:08:42.494 END TEST bdev_nbd 00:08:42.494 ************************************ 00:08:42.494 11:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:42.494 11:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:42.494 11:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:42.494 skipping fio tests on NVMe due to multi-ns failures. 00:08:42.494 11:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:42.494 11:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:42.494 11:20:25 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:42.494 11:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:42.494 11:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.494 11:20:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:42.494 ************************************ 00:08:42.494 START TEST bdev_verify 00:08:42.494 ************************************ 00:08:42.494 11:20:25 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:42.753 [2024-11-15 11:20:25.545659] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:08:42.753 [2024-11-15 11:20:25.545866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63268 ] 00:08:43.012 [2024-11-15 11:20:25.729262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.012 [2024-11-15 11:20:25.851565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.012 [2024-11-15 11:20:25.851583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.948 Running I/O for 5 seconds... 
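In the per-second samples that follow, the MiB/s figure is simply IOPS times the 4 KiB I/O size passed with -o 4096. A quick sanity check of the first sample (plain arithmetic, nothing SPDK-specific):

# 19712 IOPS x 4096 B = 80,740,352 B/s; 80,740,352 / 1048576 = 77.00 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 19712 * 4096 / 1048576 }'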
00:08:45.826 19712.00 IOPS, 77.00 MiB/s [2024-11-15T11:20:30.147Z] 19712.00 IOPS, 77.00 MiB/s [2024-11-15T11:20:31.081Z] 19669.33 IOPS, 76.83 MiB/s [2024-11-15T11:20:32.016Z] 19776.00 IOPS, 77.25 MiB/s [2024-11-15T11:20:32.016Z] 19520.00 IOPS, 76.25 MiB/s
00:08:49.067 Latency(us)
00:08:49.067 [2024-11-15T11:20:32.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:49.067 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0xbd0bd
00:08:49.067 Nvme0n1 : 5.10 1406.00 5.49 0.00 0.00 90847.64 20018.27 77213.32
00:08:49.067 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:49.067 Nvme0n1 : 5.10 1355.27 5.29 0.00 0.00 94213.04 22758.87 88175.71
00:08:49.067 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0x4ff80
00:08:49.067 Nvme1n1p1 : 5.10 1405.51 5.49 0.00 0.00 90744.75 18826.71 75306.82
00:08:49.067 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x4ff80 length 0x4ff80
00:08:49.067 Nvme1n1p1 : 5.10 1354.49 5.29 0.00 0.00 93988.99 21567.30 83886.08
00:08:49.067 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0x4ff7f
00:08:49.067 Nvme1n1p2 : 5.10 1404.71 5.49 0.00 0.00 90643.81 17515.99 74353.57
00:08:49.067 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:08:49.067 Nvme1n1p2 : 5.11 1353.81 5.29 0.00 0.00 93832.62 22163.08 81979.58
00:08:49.067 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0x80000
00:08:49.067 Nvme2n1 : 5.11 1403.60 5.48 0.00 0.00 90539.68 20494.89 69587.32
00:08:49.067 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x80000 length 0x80000
00:08:49.067 Nvme2n1 : 5.11 1353.02 5.29 0.00 0.00 93689.38 23354.65 80073.08
00:08:49.067 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0x80000
00:08:49.067 Nvme2n2 : 5.11 1403.22 5.48 0.00 0.00 90399.14 19660.80 68157.44
00:08:49.067 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x80000 length 0x80000
00:08:49.067 Nvme2n2 : 5.11 1352.43 5.28 0.00 0.00 93562.78 22997.18 78166.57
00:08:49.067 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0x80000
00:08:49.067 Nvme2n3 : 5.11 1402.60 5.48 0.00 0.00 90271.78 20018.27 71493.82
00:08:49.067 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x80000 length 0x80000
00:08:49.067 Nvme2n3 : 5.11 1351.85 5.28 0.00 0.00 93431.89 20375.74 81026.33
00:08:49.067 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x0 length 0x20000
00:08:49.067 Nvme3n1 : 5.11 1402.00 5.48 0.00 0.00 90150.82 14775.39 76260.07
00:08:49.067 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:49.067 Verification LBA range: start 0x20000 length 0x20000
00:08:49.067 Nvme3n1 : 5.12 1351.30 5.28 0.00 0.00 93347.48 18469.24 86269.21
00:08:49.067 [2024-11-15T11:20:32.016Z] ===================================================================================================================
00:08:49.067 [2024-11-15T11:20:32.016Z] Total : 19299.81 75.39 0.00 0.00 92089.66 14775.39 88175.71
00:08:50.441
00:08:50.441 real 0m7.671s
00:08:50.441 user 0m14.079s
00:08:50.441 sys 0m0.359s
00:08:50.441 11:20:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:50.441 11:20:33 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:50.441 ************************************
00:08:50.441 END TEST bdev_verify
00:08:50.441 ************************************
00:08:50.441 11:20:33 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:50.441 11:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:08:50.441 11:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:50.441 11:20:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:50.441 ************************************
00:08:50.441 START TEST bdev_verify_big_io
00:08:50.441 ************************************
00:08:50.441 11:20:33 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:50.698 [2024-11-15 11:20:33.249150] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:08:50.698 [2024-11-15 11:20:33.249347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63370 ]
00:08:50.698 [2024-11-15 11:20:33.425831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:50.698 [2024-11-15 11:20:33.549048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:50.698 [2024-11-15 11:20:33.549071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:51.632 Running I/O for 5 seconds...
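The real/user/sys triples and the START/END banner pairs above come from the run_test wrapper that drives every sub-test in this log. A rough sketch of its visible behavior, assuming the helper lives in autotest_common.sh as the xtrace suggests; the real function also handles xtrace toggling and other bookkeeping elided here:

run_test() {
  local test_name=$1
  shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                 # produces the real/user/sys lines seen above
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}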
00:08:56.908 933.00 IOPS, 58.31 MiB/s [2024-11-15T11:20:40.425Z] 2664.50 IOPS, 166.53 MiB/s [2024-11-15T11:20:40.683Z] 2969.33 IOPS, 185.58 MiB/s
00:08:57.734 Latency(us)
00:08:57.734 [2024-11-15T11:20:40.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:57.734 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0xbd0b
00:08:57.734 Nvme0n1 : 6.02 84.99 5.31 0.00 0.00 1427333.12 19541.64 2074273.98
00:08:57.734 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:57.734 Nvme0n1 : 5.86 84.68 5.29 0.00 0.00 1448107.77 27763.43 1906501.82
00:08:57.734 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0x4ff8
00:08:57.734 Nvme1n1p1 : 5.86 120.24 7.51 0.00 0.00 997289.34 88175.71 941811.90
00:08:57.734 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x4ff8 length 0x4ff8
00:08:57.734 Nvme1n1p1 : 5.79 120.93 7.56 0.00 0.00 984019.81 83409.45 960876.92
00:08:57.734 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0x4ff7
00:08:57.734 Nvme1n1p2 : 5.86 120.17 7.51 0.00 0.00 968327.91 106287.48 911307.87
00:08:57.734 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x4ff7 length 0x4ff7
00:08:57.734 Nvme1n1p2 : 5.91 126.09 7.88 0.00 0.00 925896.13 66727.56 907494.87
00:08:57.734 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0x8000
00:08:57.734 Nvme2n1 : 5.95 122.60 7.66 0.00 0.00 926657.82 63391.19 907494.87
00:08:57.734 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x8000 length 0x8000
00:08:57.734 Nvme2n1 : 5.91 125.01 7.81 0.00 0.00 905791.53 66250.94 968502.92
00:08:57.734 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0x8000
00:08:57.734 Nvme2n2 : 5.96 120.36 7.52 0.00 0.00 925194.57 25499.46 1670095.59
00:08:57.734 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x8000 length 0x8000
00:08:57.734 Nvme2n2 : 5.91 129.87 8.12 0.00 0.00 857291.56 48377.48 983754.94
00:08:57.734 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0x8000
00:08:57.734 Nvme2n3 : 6.01 125.04 7.82 0.00 0.00 863472.54 27644.28 1898875.81
00:08:57.734 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x8000 length 0x8000
00:08:57.734 Nvme2n3 : 5.94 134.34 8.40 0.00 0.00 806884.34 23473.80 999006.95
00:08:57.734 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x0 length 0x2000
00:08:57.734 Nvme3n1 : 6.05 141.53 8.85 0.00 0.00 743465.92 6285.50 1937005.85
00:08:57.734 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:57.734 Verification LBA range: start 0x2000 length 0x2000
00:08:57.734 Nvme3n1 : 6.02 153.63 9.60 0.00 0.00 688940.23 2263.97 1014258.97
00:08:57.734 [2024-11-15T11:20:40.683Z] ===================================================================================================================
00:08:57.734 [2024-11-15T11:20:40.683Z] Total : 1709.47 106.84 0.00 0.00 931010.32 2263.97 2074273.98
00:08:59.636
00:08:59.636 real 0m9.006s
00:08:59.636 user 0m16.749s
00:08:59.636 sys 0m0.402s
00:08:59.636 11:20:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:59.636 ************************************
00:08:59.636 END TEST bdev_verify_big_io
00:08:59.636 11:20:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:59.636 ************************************
00:08:59.636 11:20:42 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:59.636 11:20:42 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:08:59.636 11:20:42 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:59.636 11:20:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:59.636 ************************************
00:08:59.636 START TEST bdev_write_zeroes
00:08:59.636 ************************************
00:08:59.636 11:20:42 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:59.895 [2024-11-15 11:20:42.333333] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:08:59.895 [2024-11-15 11:20:42.333534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63486 ]
00:08:59.895 [2024-11-15 11:20:42.518408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.895 [2024-11-15 11:20:42.630185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.462 Running I/O for 1 seconds...
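The write_zeroes pass running here drives the dedicated zero-fill opcode rather than writing zero-filled buffers. Whether a bdev advertises that opcode is visible in its bdev_get_bdevs JSON (the same shape dumped later in the gpt_uuid test); a one-line probe using the rpc.py path seen throughout this log, with a bdev name taken from the GPT output below:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme1n1p1 \
  | jq -r '.[0].supported_io_types.write_zeroes'   # the GPT partitions here report true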
00:09:01.651 56448.00 IOPS, 220.50 MiB/s
00:09:01.651 Latency(us)
00:09:01.651 [2024-11-15T11:20:44.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:01.651 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.651 Nvme0n1 : 1.03 8024.27 31.34 0.00 0.00 15882.85 13047.62 32648.84
00:09:01.651 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.651 Nvme1n1p1 : 1.03 8010.04 31.29 0.00 0.00 15879.33 13345.51 33125.47
00:09:01.652 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.652 Nvme1n1p2 : 1.03 7997.19 31.24 0.00 0.00 15830.86 13107.20 32172.22
00:09:01.652 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.652 Nvme2n1 : 1.04 8030.25 31.37 0.00 0.00 15721.49 11439.01 26214.40
00:09:01.652 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.652 Nvme2n2 : 1.04 8018.22 31.32 0.00 0.00 15699.13 11736.90 26810.18
00:09:01.652 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.652 Nvme2n3 : 1.04 8006.20 31.27 0.00 0.00 15684.80 11796.48 27048.49
00:09:01.652 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:01.652 Nvme3n1 : 1.04 7994.42 31.23 0.00 0.00 15652.87 9830.40 28120.90
00:09:01.652 [2024-11-15T11:20:44.601Z] ===================================================================================================================
00:09:01.652 [2024-11-15T11:20:44.601Z] Total : 56080.59 219.06 0.00 0.00 15764.15 9830.40 33125.47
00:09:02.586
00:09:02.586 real 0m3.242s
00:09:02.586 user 0m2.802s
00:09:02.586 sys 0m0.318s
00:09:02.586 11:20:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:02.586 11:20:45 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:02.586 ************************************
00:09:02.586 END TEST bdev_write_zeroes
00:09:02.586 ************************************
00:09:02.586 11:20:45 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:02.586 11:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:09:02.586 11:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:02.586 11:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:02.586 ************************************
00:09:02.586 START TEST bdev_json_nonenclosed
00:09:02.586 ************************************
00:09:02.586 11:20:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:02.844 [2024-11-15 11:20:45.635797] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
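bdev_json_nonenclosed, which is starting here, hands bdevperf a config whose outer braces are missing and expects a clean error exit. An illustration of that failure mode; the fixture content is a guess at the shape being tested, not a copy of the repo's nonenclosed.json:

# Well-formed fragment, but not enclosed in an outer {} object:
printf '"subsystems": []\n' > /tmp/nonenclosed.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonenclosed.json \
  -q 128 -o 4096 -w write_zeroes -t 1
# expected: json_config rejects it ("not enclosed in {}") and the app stops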
00:09:02.844 [2024-11-15 11:20:45.635987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63539 ] 00:09:03.103 [2024-11-15 11:20:45.816805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.103 [2024-11-15 11:20:45.922413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.103 [2024-11-15 11:20:45.922579] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:03.103 [2024-11-15 11:20:45.922607] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:03.103 [2024-11-15 11:20:45.922621] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.361 00:09:03.361 real 0m0.631s 00:09:03.361 user 0m0.382s 00:09:03.361 sys 0m0.143s 00:09:03.361 11:20:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.361 11:20:46 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:03.361 ************************************ 00:09:03.361 END TEST bdev_json_nonenclosed 00:09:03.361 ************************************ 00:09:03.361 11:20:46 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:03.361 11:20:46 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:03.361 11:20:46 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.361 11:20:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:03.361 ************************************ 00:09:03.361 START TEST bdev_json_nonarray 00:09:03.361 ************************************ 00:09:03.361 11:20:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:03.361 [2024-11-15 11:20:46.299767] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:09:03.361 [2024-11-15 11:20:46.299922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63570 ] 00:09:03.619 [2024-11-15 11:20:46.468265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.903 [2024-11-15 11:20:46.582389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.903 [2024-11-15 11:20:46.582555] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
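The nonarray variant is the complementary negative case: the outer object is present but "subsystems" is not an array, producing the "'subsystems' should be an array" error just above. An illustrative fixture shape (again a guess, not the repo's nonarray.json):

printf '{ "subsystems": {} }\n' > /tmp/nonarray.json   # an object where an array is required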
00:09:03.903 [2024-11-15 11:20:46.582600] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:03.903 [2024-11-15 11:20:46.582620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.903 00:09:03.903 real 0m0.601s 00:09:03.903 user 0m0.375s 00:09:03.903 sys 0m0.121s 00:09:03.903 11:20:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.903 ************************************ 00:09:03.903 END TEST bdev_json_nonarray 00:09:03.903 ************************************ 00:09:03.903 11:20:46 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:04.161 11:20:46 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:09:04.161 11:20:46 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:09:04.161 11:20:46 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:04.161 11:20:46 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:04.161 11:20:46 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.161 11:20:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.161 ************************************ 00:09:04.161 START TEST bdev_gpt_uuid 00:09:04.161 ************************************ 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63596 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63596 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63596 ']' 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.161 11:20:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:04.161 [2024-11-15 11:20:47.007129] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
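The gpt_uuid test starting here asserts that each GPT partition surfaces as a bdev whose alias and unique partition GUID both equal the GUID recorded in the GPT. The probes below mirror the jq checks in the xtrace that follows, using the first partition's GUID from this log's own output:

uuid=6f89f330-603b-4116-ac73-2ca8eae53030
bdev_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$uuid")
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$uuid" ]]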
00:09:04.161 [2024-11-15 11:20:47.007353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63596 ] 00:09:04.418 [2024-11-15 11:20:47.190636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.418 [2024-11-15 11:20:47.305488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.352 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:05.352 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:09:05.352 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:05.352 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.352 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:05.610 Some configs were skipped because the RPC state that can call them passed over. 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:09:05.610 { 00:09:05.610 "name": "Nvme1n1p1", 00:09:05.610 "aliases": [ 00:09:05.610 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:05.610 ], 00:09:05.610 "product_name": "GPT Disk", 00:09:05.610 "block_size": 4096, 00:09:05.610 "num_blocks": 655104, 00:09:05.610 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:05.610 "assigned_rate_limits": { 00:09:05.610 "rw_ios_per_sec": 0, 00:09:05.610 "rw_mbytes_per_sec": 0, 00:09:05.610 "r_mbytes_per_sec": 0, 00:09:05.610 "w_mbytes_per_sec": 0 00:09:05.610 }, 00:09:05.610 "claimed": false, 00:09:05.610 "zoned": false, 00:09:05.610 "supported_io_types": { 00:09:05.610 "read": true, 00:09:05.610 "write": true, 00:09:05.610 "unmap": true, 00:09:05.610 "flush": true, 00:09:05.610 "reset": true, 00:09:05.610 "nvme_admin": false, 00:09:05.610 "nvme_io": false, 00:09:05.610 "nvme_io_md": false, 00:09:05.610 "write_zeroes": true, 00:09:05.610 "zcopy": false, 00:09:05.610 "get_zone_info": false, 00:09:05.610 "zone_management": false, 00:09:05.610 "zone_append": false, 00:09:05.610 "compare": true, 00:09:05.610 "compare_and_write": false, 00:09:05.610 "abort": true, 00:09:05.610 "seek_hole": false, 00:09:05.610 "seek_data": false, 00:09:05.610 "copy": true, 00:09:05.610 "nvme_iov_md": false 00:09:05.610 }, 00:09:05.610 "driver_specific": { 
00:09:05.610 "gpt": { 00:09:05.610 "base_bdev": "Nvme1n1", 00:09:05.610 "offset_blocks": 256, 00:09:05.610 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:05.610 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:05.610 "partition_name": "SPDK_TEST_first" 00:09:05.610 } 00:09:05.610 } 00:09:05.610 } 00:09:05.610 ]' 00:09:05.610 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:09:05.868 { 00:09:05.868 "name": "Nvme1n1p2", 00:09:05.868 "aliases": [ 00:09:05.868 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:05.868 ], 00:09:05.868 "product_name": "GPT Disk", 00:09:05.868 "block_size": 4096, 00:09:05.868 "num_blocks": 655103, 00:09:05.868 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:05.868 "assigned_rate_limits": { 00:09:05.868 "rw_ios_per_sec": 0, 00:09:05.868 "rw_mbytes_per_sec": 0, 00:09:05.868 "r_mbytes_per_sec": 0, 00:09:05.868 "w_mbytes_per_sec": 0 00:09:05.868 }, 00:09:05.868 "claimed": false, 00:09:05.868 "zoned": false, 00:09:05.868 "supported_io_types": { 00:09:05.868 "read": true, 00:09:05.868 "write": true, 00:09:05.868 "unmap": true, 00:09:05.868 "flush": true, 00:09:05.868 "reset": true, 00:09:05.868 "nvme_admin": false, 00:09:05.868 "nvme_io": false, 00:09:05.868 "nvme_io_md": false, 00:09:05.868 "write_zeroes": true, 00:09:05.868 "zcopy": false, 00:09:05.868 "get_zone_info": false, 00:09:05.868 "zone_management": false, 00:09:05.868 "zone_append": false, 00:09:05.868 "compare": true, 00:09:05.868 "compare_and_write": false, 00:09:05.868 "abort": true, 00:09:05.868 "seek_hole": false, 00:09:05.868 "seek_data": false, 00:09:05.868 "copy": true, 00:09:05.868 "nvme_iov_md": false 00:09:05.868 }, 00:09:05.868 "driver_specific": { 00:09:05.868 "gpt": { 00:09:05.868 "base_bdev": "Nvme1n1", 00:09:05.868 "offset_blocks": 655360, 00:09:05.868 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:05.868 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:05.868 "partition_name": "SPDK_TEST_second" 00:09:05.868 } 00:09:05.868 } 00:09:05.868 } 00:09:05.868 ]' 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:05.868 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63596 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63596 ']' 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63596 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63596 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:06.126 killing process with pid 63596 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63596' 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63596 00:09:06.126 11:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63596 00:09:08.025 00:09:08.025 real 0m4.087s 00:09:08.025 user 0m4.350s 00:09:08.025 sys 0m0.575s 00:09:08.025 11:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.025 11:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:08.025 ************************************ 00:09:08.025 END TEST bdev_gpt_uuid 00:09:08.025 ************************************ 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:08.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.799 Waiting for block devices as requested 00:09:08.799 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.799 0000:00:10.0 (1b36 0010): 
00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:08.283 11:20:51 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:08.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.799 Waiting for block devices as requested 00:09:08.799 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.799 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.057 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:09.057 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:14.322 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:14.322 11:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:14.322 11:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:14.322 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:14.322 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:14.322 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:14.322 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:14.322 11:20:57 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:14.322 00:09:14.322 real 1m5.328s 00:09:14.322 user 1m23.636s 00:09:14.322 sys 0m10.765s 00:09:14.322 11:20:57 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.322 11:20:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:14.322 ************************************ 00:09:14.322 END TEST blockdev_nvme_gpt 00:09:14.322 ************************************
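The wipefs lines a few entries above are worth decoding: clearing the GPT label erases the 8-byte signature "EFI PART" (hex 45 46 49 20 50 41 52 54) from the primary header at LBA 1 (byte offset 0x1000 here, since the namespace uses 4096-byte blocks) and from the backup header in the last LBA (offset 0x13ffff000 on this 5 GiB namespace), plus the 2-byte 55 aa boot signature of the protective MBR at offset 0x1fe. A sketch for inspecting exactly those bytes on a scratch device, under the same 4096-byte-LBA assumption:

  # Sketch: dump the on-disk signatures that the wipefs output above erased.
  dev=/dev/nvme0n1                                  # scratch device only
  # GPT signature at LBA 1 (byte offset 4096); prints "EFI PART" when present
  dd if="$dev" bs=1 skip=4096 count=8 status=none | xxd
  # Protective-MBR boot signature at offset 0x1fe; prints "55aa" when present
  dd if="$dev" bs=1 skip=$((0x1fe)) count=2 status=none | xxd -p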
00:09:14.322 11:20:57 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:14.322 11:20:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:14.322 11:20:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.322 11:20:57 -- common/autotest_common.sh@10 -- # set +x 00:09:14.322 ************************************ 00:09:14.322 START TEST nvme 00:09:14.322 ************************************ 00:09:14.322 11:20:57 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:14.580 * Looking for test storage... 00:09:14.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.580 11:20:57 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.580 11:20:57 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.580 11:20:57 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.580 11:20:57 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.580 11:20:57 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.580 11:20:57 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:14.580 11:20:57 nvme -- scripts/common.sh@345 -- # : 1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.580 11:20:57 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.580 11:20:57 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@353 -- # local d=1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.580 11:20:57 nvme -- scripts/common.sh@355 -- # echo 1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.580 11:20:57 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@353 -- # local d=2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.580 11:20:57 nvme -- scripts/common.sh@355 -- # echo 2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.580 11:20:57 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.580 11:20:57 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.580 11:20:57 nvme -- scripts/common.sh@368 -- # return 0 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.580 --rc genhtml_branch_coverage=1 00:09:14.580 --rc genhtml_function_coverage=1 00:09:14.580 --rc genhtml_legend=1 00:09:14.580 --rc geninfo_all_blocks=1 00:09:14.580 --rc geninfo_unexecuted_blocks=1 00:09:14.580 00:09:14.580 ' 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.580 --rc genhtml_branch_coverage=1 00:09:14.580 --rc genhtml_function_coverage=1 00:09:14.580 --rc genhtml_legend=1 00:09:14.580 --rc geninfo_all_blocks=1 00:09:14.580 --rc geninfo_unexecuted_blocks=1 00:09:14.580 00:09:14.580 ' 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.580 --rc genhtml_branch_coverage=1 00:09:14.580 --rc genhtml_function_coverage=1 00:09:14.580 --rc genhtml_legend=1 00:09:14.580 --rc geninfo_all_blocks=1 00:09:14.580 --rc geninfo_unexecuted_blocks=1 00:09:14.580 00:09:14.580 ' 00:09:14.580 11:20:57 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.580 --rc genhtml_branch_coverage=1 00:09:14.580 --rc genhtml_function_coverage=1 00:09:14.580 --rc genhtml_legend=1 00:09:14.580 --rc geninfo_all_blocks=1 00:09:14.580 --rc geninfo_unexecuted_blocks=1 00:09:14.580 00:09:14.580 '
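The lt 1.15 2 trace above is the harness deciding whether the installed lcov predates version 2: cmp_versions splits both version strings on . - : into arrays, walks them component by component (treating a missing component as 0), and returns success at the first position where ver1 is smaller, here 1 < 2; the coverage flags for older lcov are then exported. A condensed sketch of that component-wise compare (not the SPDK helper itself, and numeric components only):

  # Sketch: succeed when dot-separated version $1 is strictly older than $2,
  # mirroring the cmp_versions walk traced above. Numeric components only.
  version_lt() {
      local IFS=.-:
      local -a v1=($1) v2=($2)
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first smaller field wins
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                                          # equal is not older
  }
  version_lt 1.15 2 && echo "lcov predates 2.0"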
00:09:14.580 11:20:57 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:15.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:15.743 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.743 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.743 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.743 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:15.743 11:20:58 nvme -- nvme/nvme.sh@79 -- # uname 00:09:15.743 11:20:58 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:15.743 11:20:58 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:15.743 11:20:58 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1073 -- # stubpid=64245 00:09:15.743 Waiting for stub to be ready for secondary processes... 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to be ready for secondary processes... 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64245 ]] 00:09:15.743 11:20:58 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:09:16.001 [2024-11-15 11:20:58.730568] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:09:16.001 [2024-11-15 11:20:58.730749] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
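What the stub startup above amounts to: the harness launches the multi-process stub with 4096 MB of hugepage memory on core mask 0xE, then polls once per second for the /var/run/spdk_stub0 ready file while confirming through /proc/<pid> that the stub is still alive, so a crashed stub fails the job quickly instead of hanging it. A minimal sketch of that poll loop, with the pid and paths from this run used purely as examples:

  # Sketch: wait for a daemon to signal readiness via a file, but bail out
  # if the process dies first (pid and path below are from the run above).
  ready_file=/var/run/spdk_stub0
  stubpid=64245
  while [ ! -e "$ready_file" ]; do
      [ -e /proc/$stubpid ] || { echo "stub $stubpid died before ready" >&2; exit 1; }
      sleep 1s
  done
  echo done.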
00:09:16.936 11:20:59 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:16.936 11:20:59 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64245 ]] 00:09:16.936 11:20:59 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:09:17.195 [2024-11-15 11:21:00.068272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.453 [2024-11-15 11:21:00.179097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.453 [2024-11-15 11:21:00.179240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.453 [2024-11-15 11:21:00.179258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.453 [2024-11-15 11:21:00.199857] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:17.453 [2024-11-15 11:21:00.199906] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.453 [2024-11-15 11:21:00.213859] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:17.453 [2024-11-15 11:21:00.214017] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:17.453 [2024-11-15 11:21:00.217268] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.453 [2024-11-15 11:21:00.217581] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:17.453 [2024-11-15 11:21:00.217681] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:17.453 [2024-11-15 11:21:00.220527] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.453 [2024-11-15 11:21:00.220773] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:17.453 [2024-11-15 11:21:00.220864] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:17.453 [2024-11-15 11:21:00.223799] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:17.453 [2024-11-15 11:21:00.224018] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:17.453 [2024-11-15 11:21:00.224117] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:17.453 [2024-11-15 11:21:00.224172] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:17.453 [2024-11-15 11:21:00.224219] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:18.020 11:21:00 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:18.020 done. 00:09:18.020 11:21:00 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:09:18.020 11:21:00 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:18.020 11:21:00 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:09:18.020 11:21:00 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.020 11:21:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.020 ************************************ 00:09:18.020 START TEST nvme_reset 00:09:18.020 ************************************ 00:09:18.020 11:21:00 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:18.279 Initializing NVMe Controllers 00:09:18.279 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:18.279 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:18.279 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:18.279 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:18.279 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:18.279 00:09:18.279 real 0m0.310s 00:09:18.279 user 0m0.108s 00:09:18.279 sys 0m0.159s 00:09:18.279 11:21:01 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.279 ************************************ 00:09:18.279 END TEST nvme_reset 00:09:18.279 ************************************ 00:09:18.279 11:21:01 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:18.279 11:21:01 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:18.279 11:21:01 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:18.279 11:21:01 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.279 11:21:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.279 ************************************ 00:09:18.279 START TEST nvme_identify 00:09:18.279 ************************************ 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:09:18.279 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:18.279 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:18.279 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:18.279 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:18.279 11:21:01 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
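Before identifying anything, nvme_identify gathers the PCI addresses (BDFs) of the controllers under test: get_nvme_bdfs runs scripts/gen_nvme.sh, which emits an SPDK bdev configuration as JSON, pulls each controller's traddr out with jq, and fails if the resulting array is empty; the printf above lists the four QEMU controllers it found. A sketch of the same extraction, assuming the SPDK repo sits at the rootdir shown in the trace:

  # Sketch: collect NVMe PCI addresses the way get_nvme_bdfs does above.
  rootdir=/home/vagrant/spdk_repo/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"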
00:09:18.279 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:18.540 ===================================================== 00:09:18.540 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.540 ===================================================== 00:09:18.540 Controller Capabilities/Features 00:09:18.540 ================================ 00:09:18.540 Vendor ID: 1b36 00:09:18.540 Subsystem Vendor ID: 1af4 00:09:18.540 Serial Number: 12340 00:09:18.540 Model Number: QEMU NVMe Ctrl 00:09:18.540 Firmware Version: 8.0.0 00:09:18.540 Recommended Arb Burst: 6 00:09:18.540 IEEE OUI Identifier: 00 54 52 00:09:18.540 Multi-path I/O 00:09:18.540 May have multiple subsystem ports: No 00:09:18.540 May have multiple controllers: No 00:09:18.540 Associated with SR-IOV VF: No 00:09:18.540 Max Data Transfer Size: 524288 00:09:18.540 Max Number of Namespaces: 256 00:09:18.540 Max Number of I/O Queues: 64 00:09:18.540 NVMe Specification Version (VS): 1.4 00:09:18.540 NVMe Specification Version (Identify): 1.4 00:09:18.540 Maximum Queue Entries: 2048 00:09:18.540 Contiguous Queues Required: Yes 00:09:18.540 Arbitration Mechanisms Supported 00:09:18.540 Weighted Round Robin: Not Supported 00:09:18.540 Vendor Specific: Not Supported 00:09:18.540 Reset Timeout: 7500 ms 00:09:18.540 Doorbell Stride: 4 bytes 00:09:18.540 NVM Subsystem Reset: Not Supported 00:09:18.540 Command Sets Supported 00:09:18.540 NVM Command Set: Supported 00:09:18.540 Boot Partition: Not Supported 00:09:18.540 Memory Page Size Minimum: 4096 bytes 00:09:18.540 Memory Page Size Maximum: 65536 bytes 00:09:18.540 Persistent Memory Region: Not Supported 00:09:18.540 Optional Asynchronous Events Supported 00:09:18.540 Namespace Attribute Notices: Supported 00:09:18.540 Firmware Activation Notices: Not Supported 00:09:18.540 ANA Change Notices: Not Supported 00:09:18.540 PLE Aggregate Log Change Notices: Not Supported 00:09:18.540 LBA Status Info Alert Notices: Not Supported 00:09:18.540 EGE Aggregate Log Change Notices: Not Supported 00:09:18.540 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.540 Zone Descriptor Change Notices: Not Supported 00:09:18.540 Discovery Log Change Notices: Not Supported 00:09:18.540 Controller Attributes 00:09:18.540 128-bit Host Identifier: Not Supported 00:09:18.540 Non-Operational Permissive Mode: Not Supported 00:09:18.540 NVM Sets: Not Supported 00:09:18.540 Read Recovery Levels: Not Supported 00:09:18.540 Endurance Groups: Not Supported 00:09:18.540 Predictable Latency Mode: Not Supported 00:09:18.540 Traffic Based Keep Alive: Not Supported 00:09:18.540 Namespace Granularity: Not Supported 00:09:18.540 SQ Associations: Not Supported 00:09:18.540 UUID List: Not Supported 00:09:18.540 Multi-Domain Subsystem: Not Supported 00:09:18.540 Fixed Capacity Management: Not Supported 00:09:18.540 Variable Capacity Management: Not Supported 00:09:18.540 Delete Endurance Group: Not Supported 00:09:18.540 Delete NVM Set: Not Supported 00:09:18.540 Extended LBA Formats Supported: Supported 00:09:18.540 Flexible Data Placement Supported: Not Supported 00:09:18.540 00:09:18.540 Controller Memory Buffer Support 00:09:18.540 ================================ 00:09:18.540 Supported: No 00:09:18.540 00:09:18.540 Persistent Memory Region Support 00:09:18.540 ================================ 00:09:18.540 Supported: No 00:09:18.540 00:09:18.540 Admin 
Command Set Attributes 00:09:18.540 ============================ 00:09:18.540 Security Send/Receive: Not Supported 00:09:18.540 Format NVM: Supported 00:09:18.540 Firmware Activate/Download: Not Supported 00:09:18.540 Namespace Management: Supported 00:09:18.540 Device Self-Test: Not Supported 00:09:18.540 Directives: Supported 00:09:18.540 NVMe-MI: Not Supported 00:09:18.540 Virtualization Management: Not Supported 00:09:18.540 Doorbell Buffer Config: Supported 00:09:18.540 Get LBA Status Capability: Not Supported 00:09:18.540 Command & Feature Lockdown Capability: Not Supported 00:09:18.540 Abort Command Limit: 4 00:09:18.540 Async Event Request Limit: 4 00:09:18.540 Number of Firmware Slots: N/A 00:09:18.540 Firmware Slot 1 Read-Only: N/A 00:09:18.540 Firmware Activation Without Reset: N/A 00:09:18.540 Multiple Update Detection Support: N/A 00:09:18.540 Firmware Update Granularity: No Information Provided 00:09:18.540 Per-Namespace SMART Log: Yes 00:09:18.540 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.540 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:18.540 Command Effects Log Page: Supported 00:09:18.540 Get Log Page Extended Data: Supported 00:09:18.540 Telemetry Log Pages: Not Supported 00:09:18.540 Persistent Event Log Pages: Not Supported 00:09:18.540 Supported Log Pages Log Page: May Support 00:09:18.540 Commands Supported & Effects Log Page: Not Supported 00:09:18.540 Feature Identifiers & Effects Log Page: May Support 00:09:18.540 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.540 Data Area 4 for Telemetry Log: Not Supported 00:09:18.540 Error Log Page Entries Supported: 1 00:09:18.540 Keep Alive: Not Supported 00:09:18.540 00:09:18.540 NVM Command Set Attributes 00:09:18.540 ========================== 00:09:18.540 Submission Queue Entry Size 00:09:18.540 Max: 64 00:09:18.540 Min: 64 00:09:18.540 Completion Queue Entry Size 00:09:18.540 Max: 16 00:09:18.540 Min: 16 00:09:18.540 Number of Namespaces: 256 00:09:18.540 Compare Command: Supported 00:09:18.540 Write Uncorrectable Command: Not Supported 00:09:18.540 Dataset Management Command: Supported 00:09:18.540 Write Zeroes Command: Supported 00:09:18.540 Set Features Save Field: Supported 00:09:18.540 Reservations: Not Supported 00:09:18.540 Timestamp: Supported 00:09:18.540 Copy: Supported 00:09:18.540 Volatile Write Cache: Present 00:09:18.540 Atomic Write Unit (Normal): 1 00:09:18.540 Atomic Write Unit (PFail): 1 00:09:18.540 Atomic Compare & Write Unit: 1 00:09:18.540 Fused Compare & Write: Not Supported 00:09:18.540 Scatter-Gather List 00:09:18.540 SGL Command Set: Supported 00:09:18.540 SGL Keyed: Not Supported 00:09:18.540 SGL Bit Bucket Descriptor: Not Supported 00:09:18.540 SGL Metadata Pointer: Not Supported 00:09:18.540 Oversized SGL: Not Supported 00:09:18.540 SGL Metadata Address: Not Supported 00:09:18.540 SGL Offset: Not Supported 00:09:18.540 Transport SGL Data Block: Not Supported 00:09:18.540 Replay Protected Memory Block: Not Supported 00:09:18.540 00:09:18.540 Firmware Slot Information 00:09:18.540 ========================= 00:09:18.540 Active slot: 1 00:09:18.540 Slot 1 Firmware Revision: 1.0 00:09:18.540 00:09:18.540 00:09:18.540 Commands Supported and Effects 00:09:18.540 ============================== 00:09:18.540 Admin Commands 00:09:18.540 -------------- 00:09:18.540 Delete I/O Submission Queue (00h): Supported 00:09:18.540 Create I/O Submission Queue (01h): Supported 00:09:18.540 Get Log Page (02h): Supported 00:09:18.540 Delete I/O Completion Queue (04h): Supported 
00:09:18.540 Create I/O Completion Queue (05h): Supported 00:09:18.540 Identify (06h): Supported 00:09:18.540 Abort (08h): Supported 00:09:18.540 Set Features (09h): Supported 00:09:18.540 Get Features (0Ah): Supported 00:09:18.540 Asynchronous Event Request (0Ch): Supported 00:09:18.540 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.540 Directive Send (19h): Supported 00:09:18.540 Directive Receive (1Ah): Supported 00:09:18.540 Virtualization Management (1Ch): Supported 00:09:18.540 Doorbell Buffer Config (7Ch): Supported 00:09:18.540 Format NVM (80h): Supported LBA-Change 00:09:18.540 I/O Commands 00:09:18.540 ------------ 00:09:18.540 Flush (00h): Supported LBA-Change 00:09:18.540 Write (01h): Supported LBA-Change 00:09:18.540 Read (02h): Supported 00:09:18.540 Compare (05h): Supported 00:09:18.540 Write Zeroes (08h): Supported LBA-Change 00:09:18.540 Dataset Management (09h): Supported LBA-Change 00:09:18.540 Unknown (0Ch): Supported 00:09:18.540 Unknown (12h): Supported 00:09:18.540 Copy (19h): Supported LBA-Change 00:09:18.540 Unknown (1Dh): Supported LBA-Change 00:09:18.540 00:09:18.540 Error Log 00:09:18.540 ========= 00:09:18.540 00:09:18.540 Arbitration 00:09:18.540 =========== 00:09:18.540 Arbitration Burst: no limit 00:09:18.540 00:09:18.540 Power Management 00:09:18.540 ================ 00:09:18.540 Number of Power States: 1 00:09:18.540 Current Power State: Power State #0 00:09:18.540 Power State #0: 00:09:18.540 Max Power: 25.00 W 00:09:18.540 Non-Operational State: Operational 00:09:18.540 Entry Latency: 16 microseconds 00:09:18.540 Exit Latency: 4 microseconds 00:09:18.540 Relative Read Throughput: 0 00:09:18.540 Relative Read Latency: 0 00:09:18.540 Relative Write Throughput: 0 00:09:18.540 Relative Write Latency: 0 00:09:18.540 [2024-11-15 11:21:01.394946] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64279 terminated unexpected 00:09:18.540 [2024-11-15 11:21:01.396344] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64279 terminated unexpected 00:09:18.540 Idle Power: Not Reported 00:09:18.540 Active Power: Not Reported 00:09:18.540 Non-Operational Permissive Mode: Not Supported 00:09:18.540 00:09:18.540 Health Information 00:09:18.540 ================== 00:09:18.540 Critical Warnings: 00:09:18.540 Available Spare Space: OK 00:09:18.540 Temperature: OK 00:09:18.540 Device Reliability: OK 00:09:18.540 Read Only: No 00:09:18.540 Volatile Memory Backup: OK 00:09:18.540 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.540 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.540 Available Spare: 0% 00:09:18.540 Available Spare Threshold: 0% 00:09:18.540 Life Percentage Used: 0% 00:09:18.540 Data Units Read: 635 00:09:18.540 Data Units Written: 563 00:09:18.540 Host Read Commands: 33116 00:09:18.540 Host Write Commands: 32902 00:09:18.540 Controller Busy Time: 0 minutes 00:09:18.540 Power Cycles: 0 00:09:18.540 Power On Hours: 0 hours 00:09:18.540 Unsafe Shutdowns: 0 00:09:18.540 Unrecoverable Media Errors: 0 00:09:18.540 Lifetime Error Log Entries: 0 00:09:18.540 Warning Temperature Time: 0 minutes 00:09:18.540 Critical Temperature Time: 0 minutes 00:09:18.540 00:09:18.540 Number of Queues 00:09:18.540 ================ 00:09:18.540 Number of I/O Submission Queues: 64 00:09:18.540 Number of I/O Completion Queues: 64 00:09:18.540 00:09:18.540 ZNS Specific Controller Data 00:09:18.540 ============================ 00:09:18.540 Zone Append Size Limit: 0 00:09:18.540 
00:09:18.540 00:09:18.540 Active Namespaces 00:09:18.540 ================= 00:09:18.540 Namespace ID:1 00:09:18.540 Error Recovery Timeout: Unlimited 00:09:18.540 Command Set Identifier: NVM (00h) 00:09:18.540 Deallocate: Supported 00:09:18.540 Deallocated/Unwritten Error: Supported 00:09:18.540 Deallocated Read Value: All 0x00 00:09:18.540 Deallocate in Write Zeroes: Not Supported 00:09:18.540 Deallocated Guard Field: 0xFFFF 00:09:18.540 Flush: Supported 00:09:18.540 Reservation: Not Supported 00:09:18.540 Metadata Transferred as: Separate Metadata Buffer 00:09:18.540 Namespace Sharing Capabilities: Private 00:09:18.540 Size (in LBAs): 1548666 (5GiB) 00:09:18.540 Capacity (in LBAs): 1548666 (5GiB) 00:09:18.540 Utilization (in LBAs): 1548666 (5GiB) 00:09:18.540 Thin Provisioning: Not Supported 00:09:18.540 Per-NS Atomic Units: No 00:09:18.540 Maximum Single Source Range Length: 128 00:09:18.540 Maximum Copy Length: 128 00:09:18.540 Maximum Source Range Count: 128 00:09:18.540 NGUID/EUI64 Never Reused: No 00:09:18.540 Namespace Write Protected: No 00:09:18.540 Number of LBA Formats: 8 00:09:18.540 Current LBA Format: LBA Format #07 00:09:18.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.540 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.540 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.540 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.540 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.540 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.540 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.540 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.540 00:09:18.540 NVM Specific Namespace Data 00:09:18.541 =========================== 00:09:18.541 Logical Block Storage Tag Mask: 0 00:09:18.541 Protection Information Capabilities: 00:09:18.541 16b Guard Protection Information Storage Tag Support: No 00:09:18.541 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.541 Storage Tag Check Read Support: No 00:09:18.541 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 ===================================================== 00:09:18.541 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.541 ===================================================== 00:09:18.541 Controller Capabilities/Features 00:09:18.541 ================================ 00:09:18.541 Vendor ID: 1b36 00:09:18.541 Subsystem Vendor ID: 1af4 00:09:18.541 Serial Number: 12341 00:09:18.541 Model Number: QEMU NVMe Ctrl 00:09:18.541 Firmware Version: 8.0.0 00:09:18.541 Recommended Arb Burst: 6 00:09:18.541 IEEE OUI Identifier: 00 54 52 00:09:18.541 Multi-path I/O 00:09:18.541 May have multiple subsystem ports: No 00:09:18.541 May have multiple controllers: No 
00:09:18.541 Associated with SR-IOV VF: No 00:09:18.541 Max Data Transfer Size: 524288 00:09:18.541 Max Number of Namespaces: 256 00:09:18.541 Max Number of I/O Queues: 64 00:09:18.541 NVMe Specification Version (VS): 1.4 00:09:18.541 NVMe Specification Version (Identify): 1.4 00:09:18.541 Maximum Queue Entries: 2048 00:09:18.541 Contiguous Queues Required: Yes 00:09:18.541 Arbitration Mechanisms Supported 00:09:18.541 Weighted Round Robin: Not Supported 00:09:18.541 Vendor Specific: Not Supported 00:09:18.541 Reset Timeout: 7500 ms 00:09:18.541 Doorbell Stride: 4 bytes 00:09:18.541 NVM Subsystem Reset: Not Supported 00:09:18.541 Command Sets Supported 00:09:18.541 NVM Command Set: Supported 00:09:18.541 Boot Partition: Not Supported 00:09:18.541 Memory Page Size Minimum: 4096 bytes 00:09:18.541 Memory Page Size Maximum: 65536 bytes 00:09:18.541 Persistent Memory Region: Not Supported 00:09:18.541 Optional Asynchronous Events Supported 00:09:18.541 Namespace Attribute Notices: Supported 00:09:18.541 Firmware Activation Notices: Not Supported 00:09:18.541 ANA Change Notices: Not Supported 00:09:18.541 PLE Aggregate Log Change Notices: Not Supported 00:09:18.541 LBA Status Info Alert Notices: Not Supported 00:09:18.541 EGE Aggregate Log Change Notices: Not Supported 00:09:18.541 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.541 Zone Descriptor Change Notices: Not Supported 00:09:18.541 Discovery Log Change Notices: Not Supported 00:09:18.541 Controller Attributes 00:09:18.541 128-bit Host Identifier: Not Supported 00:09:18.541 Non-Operational Permissive Mode: Not Supported 00:09:18.541 NVM Sets: Not Supported 00:09:18.541 Read Recovery Levels: Not Supported 00:09:18.541 Endurance Groups: Not Supported 00:09:18.541 Predictable Latency Mode: Not Supported 00:09:18.541 Traffic Based Keep Alive: Not Supported 00:09:18.541 Namespace Granularity: Not Supported 00:09:18.541 SQ Associations: Not Supported 00:09:18.541 UUID List: Not Supported 00:09:18.541 Multi-Domain Subsystem: Not Supported 00:09:18.541 Fixed Capacity Management: Not Supported 00:09:18.541 Variable Capacity Management: Not Supported 00:09:18.541 Delete Endurance Group: Not Supported 00:09:18.541 Delete NVM Set: Not Supported 00:09:18.541 Extended LBA Formats Supported: Supported 00:09:18.541 Flexible Data Placement Supported: Not Supported 00:09:18.541 00:09:18.541 Controller Memory Buffer Support 00:09:18.541 ================================ 00:09:18.541 Supported: No 00:09:18.541 00:09:18.541 Persistent Memory Region Support 00:09:18.541 ================================ 00:09:18.541 Supported: No 00:09:18.541 00:09:18.541 Admin Command Set Attributes 00:09:18.541 ============================ 00:09:18.541 Security Send/Receive: Not Supported 00:09:18.541 Format NVM: Supported 00:09:18.541 Firmware Activate/Download: Not Supported 00:09:18.541 Namespace Management: Supported 00:09:18.541 Device Self-Test: Not Supported 00:09:18.541 Directives: Supported 00:09:18.541 NVMe-MI: Not Supported 00:09:18.541 Virtualization Management: Not Supported 00:09:18.541 Doorbell Buffer Config: Supported 00:09:18.541 Get LBA Status Capability: Not Supported 00:09:18.541 Command & Feature Lockdown Capability: Not Supported 00:09:18.541 Abort Command Limit: 4 00:09:18.541 Async Event Request Limit: 4 00:09:18.541 Number of Firmware Slots: N/A 00:09:18.541 Firmware Slot 1 Read-Only: N/A 00:09:18.541 Firmware Activation Without Reset: N/A 00:09:18.541 Multiple Update Detection Support: N/A 00:09:18.541 Firmware Update Granularity: No 
Information Provided 00:09:18.541 Per-Namespace SMART Log: Yes 00:09:18.541 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.541 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:18.541 Command Effects Log Page: Supported 00:09:18.541 Get Log Page Extended Data: Supported 00:09:18.541 Telemetry Log Pages: Not Supported 00:09:18.541 Persistent Event Log Pages: Not Supported 00:09:18.541 Supported Log Pages Log Page: May Support 00:09:18.541 Commands Supported & Effects Log Page: Not Supported 00:09:18.541 Feature Identifiers & Effects Log Page: May Support 00:09:18.541 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.541 Data Area 4 for Telemetry Log: Not Supported 00:09:18.541 Error Log Page Entries Supported: 1 00:09:18.541 Keep Alive: Not Supported 00:09:18.541 00:09:18.541 NVM Command Set Attributes 00:09:18.541 ========================== 00:09:18.541 Submission Queue Entry Size 00:09:18.541 Max: 64 00:09:18.541 Min: 64 00:09:18.541 Completion Queue Entry Size 00:09:18.541 Max: 16 00:09:18.541 Min: 16 00:09:18.541 Number of Namespaces: 256 00:09:18.541 Compare Command: Supported 00:09:18.541 Write Uncorrectable Command: Not Supported 00:09:18.541 Dataset Management Command: Supported 00:09:18.541 Write Zeroes Command: Supported 00:09:18.541 Set Features Save Field: Supported 00:09:18.541 Reservations: Not Supported 00:09:18.541 Timestamp: Supported 00:09:18.541 Copy: Supported 00:09:18.541 Volatile Write Cache: Present 00:09:18.541 Atomic Write Unit (Normal): 1 00:09:18.541 Atomic Write Unit (PFail): 1 00:09:18.541 Atomic Compare & Write Unit: 1 00:09:18.541 Fused Compare & Write: Not Supported 00:09:18.541 Scatter-Gather List 00:09:18.541 SGL Command Set: Supported 00:09:18.541 SGL Keyed: Not Supported 00:09:18.541 SGL Bit Bucket Descriptor: Not Supported 00:09:18.541 SGL Metadata Pointer: Not Supported 00:09:18.541 Oversized SGL: Not Supported 00:09:18.541 SGL Metadata Address: Not Supported 00:09:18.541 SGL Offset: Not Supported 00:09:18.541 Transport SGL Data Block: Not Supported 00:09:18.541 Replay Protected Memory Block: Not Supported 00:09:18.541 00:09:18.541 Firmware Slot Information 00:09:18.541 ========================= 00:09:18.541 Active slot: 1 00:09:18.541 Slot 1 Firmware Revision: 1.0 00:09:18.541 00:09:18.541 00:09:18.541 Commands Supported and Effects 00:09:18.541 ============================== 00:09:18.541 Admin Commands 00:09:18.541 -------------- 00:09:18.541 Delete I/O Submission Queue (00h): Supported 00:09:18.541 Create I/O Submission Queue (01h): Supported 00:09:18.541 Get Log Page (02h): Supported 00:09:18.541 Delete I/O Completion Queue (04h): Supported 00:09:18.541 Create I/O Completion Queue (05h): Supported 00:09:18.541 Identify (06h): Supported 00:09:18.541 Abort (08h): Supported 00:09:18.541 Set Features (09h): Supported 00:09:18.541 Get Features (0Ah): Supported 00:09:18.541 Asynchronous Event Request (0Ch): Supported 00:09:18.541 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.541 Directive Send (19h): Supported 00:09:18.541 Directive Receive (1Ah): Supported 00:09:18.541 Virtualization Management (1Ch): Supported 00:09:18.541 Doorbell Buffer Config (7Ch): Supported 00:09:18.541 Format NVM (80h): Supported LBA-Change 00:09:18.541 I/O Commands 00:09:18.541 ------------ 00:09:18.541 Flush (00h): Supported LBA-Change 00:09:18.541 Write (01h): Supported LBA-Change 00:09:18.541 Read (02h): Supported 00:09:18.541 Compare (05h): Supported 00:09:18.541 Write Zeroes (08h): Supported LBA-Change 00:09:18.541 Dataset Management 
(09h): Supported LBA-Change 00:09:18.541 Unknown (0Ch): Supported 00:09:18.541 Unknown (12h): Supported 00:09:18.541 Copy (19h): Supported LBA-Change 00:09:18.541 Unknown (1Dh): Supported LBA-Change 00:09:18.541 00:09:18.541 Error Log 00:09:18.541 ========= 00:09:18.541 00:09:18.541 Arbitration 00:09:18.541 =========== 00:09:18.541 Arbitration Burst: no limit 00:09:18.541 00:09:18.541 Power Management 00:09:18.541 ================ 00:09:18.541 Number of Power States: 1 00:09:18.541 Current Power State: Power State #0 00:09:18.541 Power State #0: 00:09:18.541 Max Power: 25.00 W 00:09:18.541 Non-Operational State: Operational 00:09:18.541 Entry Latency: 16 microseconds 00:09:18.541 Exit Latency: 4 microseconds 00:09:18.541 Relative Read Throughput: 0 00:09:18.541 Relative Read Latency: 0 00:09:18.541 Relative Write Throughput: 0 00:09:18.541 Relative Write Latency: 0 00:09:18.541 Idle Power: Not Reported 00:09:18.541 Active Power: Not Reported 00:09:18.541 Non-Operational Permissive Mode: Not Supported 00:09:18.541 00:09:18.541 Health Information 00:09:18.541 ================== 00:09:18.541 Critical Warnings: 00:09:18.541 Available Spare Space: OK 00:09:18.541 [2024-11-15 11:21:01.397878] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64279 terminated unexpected 00:09:18.541 Temperature: OK 00:09:18.541 Device Reliability: OK 00:09:18.541 Read Only: No 00:09:18.541 Volatile Memory Backup: OK 00:09:18.541 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.541 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.541 Available Spare: 0% 00:09:18.541 Available Spare Threshold: 0% 00:09:18.541 Life Percentage Used: 0% 00:09:18.541 Data Units Read: 1041 00:09:18.541 Data Units Written: 913 00:09:18.541 Host Read Commands: 50002 00:09:18.541 Host Write Commands: 48839 00:09:18.541 Controller Busy Time: 0 minutes 00:09:18.541 Power Cycles: 0 00:09:18.541 Power On Hours: 0 hours 00:09:18.541 Unsafe Shutdowns: 0 00:09:18.541 Unrecoverable Media Errors: 0 00:09:18.541 Lifetime Error Log Entries: 0 00:09:18.541 Warning Temperature Time: 0 minutes 00:09:18.541 Critical Temperature Time: 0 minutes 00:09:18.541 00:09:18.541 Number of Queues 00:09:18.541 ================ 00:09:18.541 Number of I/O Submission Queues: 64 00:09:18.541 Number of I/O Completion Queues: 64 00:09:18.541 00:09:18.541 ZNS Specific Controller Data 00:09:18.541 ============================ 00:09:18.541 Zone Append Size Limit: 0 00:09:18.541 00:09:18.541 00:09:18.541 Active Namespaces 00:09:18.541 ================= 00:09:18.541 Namespace ID:1 00:09:18.541 Error Recovery Timeout: Unlimited 00:09:18.541 Command Set Identifier: NVM (00h) 00:09:18.541 Deallocate: Supported 00:09:18.541 Deallocated/Unwritten Error: Supported 00:09:18.541 Deallocated Read Value: All 0x00 00:09:18.541 Deallocate in Write Zeroes: Not Supported 00:09:18.541 Deallocated Guard Field: 0xFFFF 00:09:18.541 Flush: Supported 00:09:18.541 Reservation: Not Supported 00:09:18.541 Namespace Sharing Capabilities: Private 00:09:18.541 Size (in LBAs): 1310720 (5GiB) 00:09:18.541 Capacity (in LBAs): 1310720 (5GiB) 00:09:18.541 Utilization (in LBAs): 1310720 (5GiB) 00:09:18.541 Thin Provisioning: Not Supported 00:09:18.541 Per-NS Atomic Units: No 00:09:18.541 Maximum Single Source Range Length: 128 00:09:18.541 Maximum Copy Length: 128 00:09:18.541 Maximum Source Range Count: 128 00:09:18.541 NGUID/EUI64 Never Reused: No 00:09:18.541 Namespace Write Protected: No 00:09:18.541 Number of LBA Formats: 8 00:09:18.541 Current LBA Format: 
LBA Format #04 00:09:18.541 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.541 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.541 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.541 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.541 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.541 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.541 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.541 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.541 00:09:18.541 NVM Specific Namespace Data 00:09:18.541 =========================== 00:09:18.541 Logical Block Storage Tag Mask: 0 00:09:18.541 Protection Information Capabilities: 00:09:18.541 16b Guard Protection Information Storage Tag Support: No 00:09:18.541 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.541 Storage Tag Check Read Support: No 00:09:18.541 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.541 ===================================================== 00:09:18.541 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.541 ===================================================== 00:09:18.541 Controller Capabilities/Features 00:09:18.541 ================================ 00:09:18.541 Vendor ID: 1b36 00:09:18.541 Subsystem Vendor ID: 1af4 00:09:18.541 Serial Number: 12343 00:09:18.541 Model Number: QEMU NVMe Ctrl 00:09:18.541 Firmware Version: 8.0.0 00:09:18.541 Recommended Arb Burst: 6 00:09:18.541 IEEE OUI Identifier: 00 54 52 00:09:18.541 Multi-path I/O 00:09:18.541 May have multiple subsystem ports: No 00:09:18.541 May have multiple controllers: Yes 00:09:18.541 Associated with SR-IOV VF: No 00:09:18.541 Max Data Transfer Size: 524288 00:09:18.541 Max Number of Namespaces: 256 00:09:18.541 Max Number of I/O Queues: 64 00:09:18.541 NVMe Specification Version (VS): 1.4 00:09:18.541 NVMe Specification Version (Identify): 1.4 00:09:18.541 Maximum Queue Entries: 2048 00:09:18.541 Contiguous Queues Required: Yes 00:09:18.541 Arbitration Mechanisms Supported 00:09:18.541 Weighted Round Robin: Not Supported 00:09:18.541 Vendor Specific: Not Supported 00:09:18.541 Reset Timeout: 7500 ms 00:09:18.541 Doorbell Stride: 4 bytes 00:09:18.541 NVM Subsystem Reset: Not Supported 00:09:18.541 Command Sets Supported 00:09:18.541 NVM Command Set: Supported 00:09:18.541 Boot Partition: Not Supported 00:09:18.541 Memory Page Size Minimum: 4096 bytes 00:09:18.541 Memory Page Size Maximum: 65536 bytes 00:09:18.541 Persistent Memory Region: Not Supported 00:09:18.541 Optional Asynchronous Events Supported 00:09:18.541 Namespace Attribute Notices: Supported 00:09:18.541 Firmware Activation Notices: Not Supported 00:09:18.541 ANA Change Notices: Not Supported 00:09:18.541 PLE Aggregate Log 
Change Notices: Not Supported 00:09:18.541 LBA Status Info Alert Notices: Not Supported 00:09:18.541 EGE Aggregate Log Change Notices: Not Supported 00:09:18.541 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.541 Zone Descriptor Change Notices: Not Supported 00:09:18.541 Discovery Log Change Notices: Not Supported 00:09:18.541 Controller Attributes 00:09:18.541 128-bit Host Identifier: Not Supported 00:09:18.541 Non-Operational Permissive Mode: Not Supported 00:09:18.541 NVM Sets: Not Supported 00:09:18.541 Read Recovery Levels: Not Supported 00:09:18.541 Endurance Groups: Supported 00:09:18.541 Predictable Latency Mode: Not Supported 00:09:18.541 Traffic Based Keep Alive: Not Supported 00:09:18.541 Namespace Granularity: Not Supported 00:09:18.541 SQ Associations: Not Supported 00:09:18.541 UUID List: Not Supported 00:09:18.541 Multi-Domain Subsystem: Not Supported 00:09:18.541 Fixed Capacity Management: Not Supported 00:09:18.541 Variable Capacity Management: Not Supported 00:09:18.541 Delete Endurance Group: Not Supported 00:09:18.541 Delete NVM Set: Not Supported 00:09:18.541 Extended LBA Formats Supported: Supported 00:09:18.541 Flexible Data Placement Supported: Supported 00:09:18.541 00:09:18.541 Controller Memory Buffer Support 00:09:18.541 ================================ 00:09:18.541 Supported: No 00:09:18.541 00:09:18.541 Persistent Memory Region Support 00:09:18.541 ================================ 00:09:18.541 Supported: No 00:09:18.541 00:09:18.541 Admin Command Set Attributes 00:09:18.541 ============================ 00:09:18.541 Security Send/Receive: Not Supported 00:09:18.542 Format NVM: Supported 00:09:18.542 Firmware Activate/Download: Not Supported 00:09:18.542 Namespace Management: Supported 00:09:18.542 Device Self-Test: Not Supported 00:09:18.542 Directives: Supported 00:09:18.542 NVMe-MI: Not Supported 00:09:18.542 Virtualization Management: Not Supported 00:09:18.542 Doorbell Buffer Config: Supported 00:09:18.542 Get LBA Status Capability: Not Supported 00:09:18.542 Command & Feature Lockdown Capability: Not Supported 00:09:18.542 Abort Command Limit: 4 00:09:18.542 Async Event Request Limit: 4 00:09:18.542 Number of Firmware Slots: N/A 00:09:18.542 Firmware Slot 1 Read-Only: N/A 00:09:18.542 Firmware Activation Without Reset: N/A 00:09:18.542 Multiple Update Detection Support: N/A 00:09:18.542 Firmware Update Granularity: No Information Provided 00:09:18.542 Per-Namespace SMART Log: Yes 00:09:18.542 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.542 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:18.542 Command Effects Log Page: Supported 00:09:18.542 Get Log Page Extended Data: Supported 00:09:18.542 Telemetry Log Pages: Not Supported 00:09:18.542 Persistent Event Log Pages: Not Supported 00:09:18.542 Supported Log Pages Log Page: May Support 00:09:18.542 Commands Supported & Effects Log Page: Not Supported 00:09:18.542 Feature Identifiers & Effects Log Page: May Support 00:09:18.542 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.542 Data Area 4 for Telemetry Log: Not Supported 00:09:18.542 Error Log Page Entries Supported: 1 00:09:18.542 Keep Alive: Not Supported 00:09:18.542 00:09:18.542 NVM Command Set Attributes 00:09:18.542 ========================== 00:09:18.542 Submission Queue Entry Size 00:09:18.542 Max: 64 00:09:18.542 Min: 64 00:09:18.542 Completion Queue Entry Size 00:09:18.542 Max: 16 00:09:18.542 Min: 16 00:09:18.542 Number of Namespaces: 256 00:09:18.542 Compare Command: Supported 00:09:18.542 Write 
Uncorrectable Command: Not Supported 00:09:18.542 Dataset Management Command: Supported 00:09:18.542 Write Zeroes Command: Supported 00:09:18.542 Set Features Save Field: Supported 00:09:18.542 Reservations: Not Supported 00:09:18.542 Timestamp: Supported 00:09:18.542 Copy: Supported 00:09:18.542 Volatile Write Cache: Present 00:09:18.542 Atomic Write Unit (Normal): 1 00:09:18.542 Atomic Write Unit (PFail): 1 00:09:18.542 Atomic Compare & Write Unit: 1 00:09:18.542 Fused Compare & Write: Not Supported 00:09:18.542 Scatter-Gather List 00:09:18.542 SGL Command Set: Supported 00:09:18.542 SGL Keyed: Not Supported 00:09:18.542 SGL Bit Bucket Descriptor: Not Supported 00:09:18.542 SGL Metadata Pointer: Not Supported 00:09:18.542 Oversized SGL: Not Supported 00:09:18.542 SGL Metadata Address: Not Supported 00:09:18.542 SGL Offset: Not Supported 00:09:18.542 Transport SGL Data Block: Not Supported 00:09:18.542 Replay Protected Memory Block: Not Supported 00:09:18.542 00:09:18.542 Firmware Slot Information 00:09:18.542 ========================= 00:09:18.542 Active slot: 1 00:09:18.542 Slot 1 Firmware Revision: 1.0 00:09:18.542 00:09:18.542 00:09:18.542 Commands Supported and Effects 00:09:18.542 ============================== 00:09:18.542 Admin Commands 00:09:18.542 -------------- 00:09:18.542 Delete I/O Submission Queue (00h): Supported 00:09:18.542 Create I/O Submission Queue (01h): Supported 00:09:18.542 Get Log Page (02h): Supported 00:09:18.542 Delete I/O Completion Queue (04h): Supported 00:09:18.542 Create I/O Completion Queue (05h): Supported 00:09:18.542 Identify (06h): Supported 00:09:18.542 Abort (08h): Supported 00:09:18.542 Set Features (09h): Supported 00:09:18.542 Get Features (0Ah): Supported 00:09:18.542 Asynchronous Event Request (0Ch): Supported 00:09:18.542 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.542 Directive Send (19h): Supported 00:09:18.542 Directive Receive (1Ah): Supported 00:09:18.542 Virtualization Management (1Ch): Supported 00:09:18.542 Doorbell Buffer Config (7Ch): Supported 00:09:18.542 Format NVM (80h): Supported LBA-Change 00:09:18.542 I/O Commands 00:09:18.542 ------------ 00:09:18.542 Flush (00h): Supported LBA-Change 00:09:18.542 Write (01h): Supported LBA-Change 00:09:18.542 Read (02h): Supported 00:09:18.542 Compare (05h): Supported 00:09:18.542 Write Zeroes (08h): Supported LBA-Change 00:09:18.542 Dataset Management (09h): Supported LBA-Change 00:09:18.542 Unknown (0Ch): Supported 00:09:18.542 Unknown (12h): Supported 00:09:18.542 Copy (19h): Supported LBA-Change 00:09:18.542 Unknown (1Dh): Supported LBA-Change 00:09:18.542 00:09:18.542 Error Log 00:09:18.542 ========= 00:09:18.542 00:09:18.542 Arbitration 00:09:18.542 =========== 00:09:18.542 Arbitration Burst: no limit 00:09:18.542 00:09:18.542 Power Management 00:09:18.542 ================ 00:09:18.542 Number of Power States: 1 00:09:18.542 Current Power State: Power State #0 00:09:18.542 Power State #0: 00:09:18.542 Max Power: 25.00 W 00:09:18.542 Non-Operational State: Operational 00:09:18.542 Entry Latency: 16 microseconds 00:09:18.542 Exit Latency: 4 microseconds 00:09:18.542 Relative Read Throughput: 0 00:09:18.542 Relative Read Latency: 0 00:09:18.542 Relative Write Throughput: 0 00:09:18.542 Relative Write Latency: 0 00:09:18.542 Idle Power: Not Reported 00:09:18.542 Active Power: Not Reported 00:09:18.542 Non-Operational Permissive Mode: Not Supported 00:09:18.542 00:09:18.542 Health Information 00:09:18.542 ================== 00:09:18.542 Critical Warnings: 00:09:18.542 
Available Spare Space: OK 00:09:18.542 Temperature: OK 00:09:18.542 Device Reliability: OK 00:09:18.542 Read Only: No 00:09:18.542 Volatile Memory Backup: OK 00:09:18.542 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.542 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.542 Available Spare: 0% 00:09:18.542 Available Spare Threshold: 0% 00:09:18.542 Life Percentage Used: 0% 00:09:18.542 Data Units Read: 785 00:09:18.542 Data Units Written: 714 00:09:18.542 Host Read Commands: 34527 00:09:18.542 Host Write Commands: 33950 00:09:18.542 Controller Busy Time: 0 minutes 00:09:18.542 Power Cycles: 0 00:09:18.542 Power On Hours: 0 hours 00:09:18.542 Unsafe Shutdowns: 0 00:09:18.542 Unrecoverable Media Errors: 0 00:09:18.542 Lifetime Error Log Entries: 0 00:09:18.542 Warning Temperature Time: 0 minutes 00:09:18.542 Critical Temperature Time: 0 minutes 00:09:18.542 00:09:18.542 Number of Queues 00:09:18.542 ================ 00:09:18.542 Number of I/O Submission Queues: 64 00:09:18.542 Number of I/O Completion Queues: 64 00:09:18.542 00:09:18.542 ZNS Specific Controller Data 00:09:18.542 ============================ 00:09:18.542 Zone Append Size Limit: 0 00:09:18.542 00:09:18.542 00:09:18.542 Active Namespaces 00:09:18.542 ================= 00:09:18.542 Namespace ID:1 00:09:18.542 Error Recovery Timeout: Unlimited 00:09:18.542 Command Set Identifier: NVM (00h) 00:09:18.542 Deallocate: Supported 00:09:18.542 Deallocated/Unwritten Error: Supported 00:09:18.542 Deallocated Read Value: All 0x00 00:09:18.542 Deallocate in Write Zeroes: Not Supported 00:09:18.542 Deallocated Guard Field: 0xFFFF 00:09:18.542 Flush: Supported 00:09:18.542 Reservation: Not Supported 00:09:18.542 Namespace Sharing Capabilities: Multiple Controllers 00:09:18.542 Size (in LBAs): 262144 (1GiB) 00:09:18.542 Capacity (in LBAs): 262144 (1GiB) 00:09:18.542 Utilization (in LBAs): 262144 (1GiB) 00:09:18.542 Thin Provisioning: Not Supported 00:09:18.542 Per-NS Atomic Units: No 00:09:18.542 Maximum Single Source Range Length: 128 00:09:18.542 Maximum Copy Length: 128 00:09:18.542 Maximum Source Range Count: 128 00:09:18.542 NGUID/EUI64 Never Reused: No 00:09:18.542 Namespace Write Protected: No 00:09:18.542 Endurance group ID: 1 00:09:18.542 Number of LBA Formats: 8 00:09:18.542 Current LBA Format: LBA Format #04 00:09:18.542 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.542 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.542 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.542 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.542 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.542 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.542 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.542 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.542 00:09:18.542 Get Feature FDP: 00:09:18.542 ================ 00:09:18.542 Enabled: Yes 00:09:18.542 FDP configuration index: 0 00:09:18.542 00:09:18.542 FDP configurations log page 00:09:18.542 =========================== 00:09:18.542 Number of FDP configurations: 1 00:09:18.542 Version: 0 00:09:18.542 Size: 112 00:09:18.542 FDP Configuration Descriptor: 0 00:09:18.542 Descriptor Size: 96 00:09:18.542 Reclaim Group Identifier format: 2 00:09:18.542 FDP Volatile Write Cache: Not Present 00:09:18.542 FDP Configuration: Valid 00:09:18.542 Vendor Specific Size: 0 00:09:18.542 Number of Reclaim Groups: 2 00:09:18.542 Number of Reclaim Unit Handles: 8 00:09:18.542 Max Placement Identifiers: 128 00:09:18.542 Number of 
Namespaces Supported: 256 00:09:18.542 Reclaim unit Nominal Size: 6000000 bytes 00:09:18.542 Estimated Reclaim Unit Time Limit: Not Reported 00:09:18.542 RUH Desc #000: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #001: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #002: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #003: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #004: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #005: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #006: RUH Type: Initially Isolated 00:09:18.542 RUH Desc #007: RUH Type: Initially Isolated 00:09:18.542 00:09:18.542 FDP reclaim unit handle usage log page 00:09:18.542 ====================================== 00:09:18.542 Number of Reclaim Unit Handles: 8 00:09:18.542 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:18.542 RUH Usage Desc #001: RUH Attributes: Unused 00:09:18.542 RUH Usage Desc #002: RUH Attributes: Unused 00:09:18.542 RUH Usage Desc #003: RUH Attributes: Unused 00:09:18.542 RUH Usage Desc #004: RUH Attributes: Unused 00:09:18.542 RUH Usage Desc #005: RUH Attributes: Unused 00:09:18.542 RUH Usage Desc #006: RUH Attributes: Unused 00:09:18.542 RUH Usage Desc #007: RUH Attributes: Unused 00:09:18.542 00:09:18.542 FDP statistics log page 00:09:18.542 ======================= 00:09:18.542 Host bytes with metadata written: 454008832 00:09:18.542 [2024-11-15 11:21:01.400077] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64279 terminated unexpected 00:09:18.542 Media bytes with metadata written: 454053888 00:09:18.542 Media bytes erased: 0 00:09:18.542 00:09:18.542 FDP events log page 00:09:18.542 =================== 00:09:18.542 Number of FDP events: 0 00:09:18.542 00:09:18.542 NVM Specific Namespace Data 00:09:18.542 =========================== 00:09:18.542 Logical Block Storage Tag Mask: 0 00:09:18.542 Protection Information Capabilities: 00:09:18.542 16b Guard Protection Information Storage Tag Support: No 00:09:18.542 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.542 Storage Tag Check Read Support: No 00:09:18.542 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.542 ===================================================== 00:09:18.542 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.542 ===================================================== 00:09:18.542 Controller Capabilities/Features 00:09:18.542 ================================ 00:09:18.542 Vendor ID: 1b36 00:09:18.542 Subsystem Vendor ID: 1af4 00:09:18.542 Serial Number: 12342 00:09:18.542 Model Number: QEMU NVMe Ctrl 00:09:18.542 Firmware Version: 8.0.0 00:09:18.542 Recommended Arb Burst: 6 00:09:18.542 IEEE OUI Identifier: 00 54 52 00:09:18.542 Multi-path I/O 
00:09:18.542 May have multiple subsystem ports: No 00:09:18.542 May have multiple controllers: No 00:09:18.542 Associated with SR-IOV VF: No 00:09:18.542 Max Data Transfer Size: 524288 00:09:18.542 Max Number of Namespaces: 256 00:09:18.542 Max Number of I/O Queues: 64 00:09:18.542 NVMe Specification Version (VS): 1.4 00:09:18.542 NVMe Specification Version (Identify): 1.4 00:09:18.542 Maximum Queue Entries: 2048 00:09:18.542 Contiguous Queues Required: Yes 00:09:18.542 Arbitration Mechanisms Supported 00:09:18.542 Weighted Round Robin: Not Supported 00:09:18.542 Vendor Specific: Not Supported 00:09:18.542 Reset Timeout: 7500 ms 00:09:18.542 Doorbell Stride: 4 bytes 00:09:18.542 NVM Subsystem Reset: Not Supported 00:09:18.542 Command Sets Supported 00:09:18.542 NVM Command Set: Supported 00:09:18.542 Boot Partition: Not Supported 00:09:18.542 Memory Page Size Minimum: 4096 bytes 00:09:18.542 Memory Page Size Maximum: 65536 bytes 00:09:18.542 Persistent Memory Region: Not Supported 00:09:18.542 Optional Asynchronous Events Supported 00:09:18.542 Namespace Attribute Notices: Supported 00:09:18.542 Firmware Activation Notices: Not Supported 00:09:18.542 ANA Change Notices: Not Supported 00:09:18.542 PLE Aggregate Log Change Notices: Not Supported 00:09:18.542 LBA Status Info Alert Notices: Not Supported 00:09:18.542 EGE Aggregate Log Change Notices: Not Supported 00:09:18.542 Normal NVM Subsystem Shutdown event: Not Supported 00:09:18.542 Zone Descriptor Change Notices: Not Supported 00:09:18.542 Discovery Log Change Notices: Not Supported 00:09:18.542 Controller Attributes 00:09:18.542 128-bit Host Identifier: Not Supported 00:09:18.542 Non-Operational Permissive Mode: Not Supported 00:09:18.542 NVM Sets: Not Supported 00:09:18.542 Read Recovery Levels: Not Supported 00:09:18.542 Endurance Groups: Not Supported 00:09:18.542 Predictable Latency Mode: Not Supported 00:09:18.542 Traffic Based Keep Alive: Not Supported 00:09:18.542 Namespace Granularity: Not Supported 00:09:18.542 SQ Associations: Not Supported 00:09:18.542 UUID List: Not Supported 00:09:18.542 Multi-Domain Subsystem: Not Supported 00:09:18.542 Fixed Capacity Management: Not Supported 00:09:18.542 Variable Capacity Management: Not Supported 00:09:18.542 Delete Endurance Group: Not Supported 00:09:18.542 Delete NVM Set: Not Supported 00:09:18.542 Extended LBA Formats Supported: Supported 00:09:18.542 Flexible Data Placement Supported: Not Supported 00:09:18.542 00:09:18.542 Controller Memory Buffer Support 00:09:18.542 ================================ 00:09:18.542 Supported: No 00:09:18.542 00:09:18.542 Persistent Memory Region Support 00:09:18.542 ================================ 00:09:18.542 Supported: No 00:09:18.542 00:09:18.542 Admin Command Set Attributes 00:09:18.542 ============================ 00:09:18.542 Security Send/Receive: Not Supported 00:09:18.542 Format NVM: Supported 00:09:18.542 Firmware Activate/Download: Not Supported 00:09:18.542 Namespace Management: Supported 00:09:18.542 Device Self-Test: Not Supported 00:09:18.542 Directives: Supported 00:09:18.542 NVMe-MI: Not Supported 00:09:18.542 Virtualization Management: Not Supported 00:09:18.543 Doorbell Buffer Config: Supported 00:09:18.543 Get LBA Status Capability: Not Supported 00:09:18.543 Command & Feature Lockdown Capability: Not Supported 00:09:18.543 Abort Command Limit: 4 00:09:18.543 Async Event Request Limit: 4 00:09:18.543 Number of Firmware Slots: N/A 00:09:18.543 Firmware Slot 1 Read-Only: N/A 00:09:18.543 Firmware Activation Without Reset: N/A 
00:09:18.543 Multiple Update Detection Support: N/A 00:09:18.543 Firmware Update Granularity: No Information Provided 00:09:18.543 Per-Namespace SMART Log: Yes 00:09:18.543 Asymmetric Namespace Access Log Page: Not Supported 00:09:18.543 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:18.543 Command Effects Log Page: Supported 00:09:18.543 Get Log Page Extended Data: Supported 00:09:18.543 Telemetry Log Pages: Not Supported 00:09:18.543 Persistent Event Log Pages: Not Supported 00:09:18.543 Supported Log Pages Log Page: May Support 00:09:18.543 Commands Supported & Effects Log Page: Not Supported 00:09:18.543 Feature Identifiers & Effects Log Page:May Support 00:09:18.543 NVMe-MI Commands & Effects Log Page: May Support 00:09:18.543 Data Area 4 for Telemetry Log: Not Supported 00:09:18.543 Error Log Page Entries Supported: 1 00:09:18.543 Keep Alive: Not Supported 00:09:18.543 00:09:18.543 NVM Command Set Attributes 00:09:18.543 ========================== 00:09:18.543 Submission Queue Entry Size 00:09:18.543 Max: 64 00:09:18.543 Min: 64 00:09:18.543 Completion Queue Entry Size 00:09:18.543 Max: 16 00:09:18.543 Min: 16 00:09:18.543 Number of Namespaces: 256 00:09:18.543 Compare Command: Supported 00:09:18.543 Write Uncorrectable Command: Not Supported 00:09:18.543 Dataset Management Command: Supported 00:09:18.543 Write Zeroes Command: Supported 00:09:18.543 Set Features Save Field: Supported 00:09:18.543 Reservations: Not Supported 00:09:18.543 Timestamp: Supported 00:09:18.543 Copy: Supported 00:09:18.543 Volatile Write Cache: Present 00:09:18.543 Atomic Write Unit (Normal): 1 00:09:18.543 Atomic Write Unit (PFail): 1 00:09:18.543 Atomic Compare & Write Unit: 1 00:09:18.543 Fused Compare & Write: Not Supported 00:09:18.543 Scatter-Gather List 00:09:18.543 SGL Command Set: Supported 00:09:18.543 SGL Keyed: Not Supported 00:09:18.543 SGL Bit Bucket Descriptor: Not Supported 00:09:18.543 SGL Metadata Pointer: Not Supported 00:09:18.543 Oversized SGL: Not Supported 00:09:18.543 SGL Metadata Address: Not Supported 00:09:18.543 SGL Offset: Not Supported 00:09:18.543 Transport SGL Data Block: Not Supported 00:09:18.543 Replay Protected Memory Block: Not Supported 00:09:18.543 00:09:18.543 Firmware Slot Information 00:09:18.543 ========================= 00:09:18.543 Active slot: 1 00:09:18.543 Slot 1 Firmware Revision: 1.0 00:09:18.543 00:09:18.543 00:09:18.543 Commands Supported and Effects 00:09:18.543 ============================== 00:09:18.543 Admin Commands 00:09:18.543 -------------- 00:09:18.543 Delete I/O Submission Queue (00h): Supported 00:09:18.543 Create I/O Submission Queue (01h): Supported 00:09:18.543 Get Log Page (02h): Supported 00:09:18.543 Delete I/O Completion Queue (04h): Supported 00:09:18.543 Create I/O Completion Queue (05h): Supported 00:09:18.543 Identify (06h): Supported 00:09:18.543 Abort (08h): Supported 00:09:18.543 Set Features (09h): Supported 00:09:18.543 Get Features (0Ah): Supported 00:09:18.543 Asynchronous Event Request (0Ch): Supported 00:09:18.543 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:18.543 Directive Send (19h): Supported 00:09:18.543 Directive Receive (1Ah): Supported 00:09:18.543 Virtualization Management (1Ch): Supported 00:09:18.543 Doorbell Buffer Config (7Ch): Supported 00:09:18.543 Format NVM (80h): Supported LBA-Change 00:09:18.543 I/O Commands 00:09:18.543 ------------ 00:09:18.543 Flush (00h): Supported LBA-Change 00:09:18.543 Write (01h): Supported LBA-Change 00:09:18.543 Read (02h): Supported 00:09:18.543 Compare (05h): 
Supported 00:09:18.543 Write Zeroes (08h): Supported LBA-Change 00:09:18.543 Dataset Management (09h): Supported LBA-Change 00:09:18.543 Unknown (0Ch): Supported 00:09:18.543 Unknown (12h): Supported 00:09:18.543 Copy (19h): Supported LBA-Change 00:09:18.543 Unknown (1Dh): Supported LBA-Change 00:09:18.543 00:09:18.543 Error Log 00:09:18.543 ========= 00:09:18.543 00:09:18.543 Arbitration 00:09:18.543 =========== 00:09:18.543 Arbitration Burst: no limit 00:09:18.543 00:09:18.543 Power Management 00:09:18.543 ================ 00:09:18.543 Number of Power States: 1 00:09:18.543 Current Power State: Power State #0 00:09:18.543 Power State #0: 00:09:18.543 Max Power: 25.00 W 00:09:18.543 Non-Operational State: Operational 00:09:18.543 Entry Latency: 16 microseconds 00:09:18.543 Exit Latency: 4 microseconds 00:09:18.543 Relative Read Throughput: 0 00:09:18.543 Relative Read Latency: 0 00:09:18.543 Relative Write Throughput: 0 00:09:18.543 Relative Write Latency: 0 00:09:18.543 Idle Power: Not Reported 00:09:18.543 Active Power: Not Reported 00:09:18.543 Non-Operational Permissive Mode: Not Supported 00:09:18.543 00:09:18.543 Health Information 00:09:18.543 ================== 00:09:18.543 Critical Warnings: 00:09:18.543 Available Spare Space: OK 00:09:18.543 Temperature: OK 00:09:18.543 Device Reliability: OK 00:09:18.543 Read Only: No 00:09:18.543 Volatile Memory Backup: OK 00:09:18.543 Current Temperature: 323 Kelvin (50 Celsius) 00:09:18.543 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:18.543 Available Spare: 0% 00:09:18.543 Available Spare Threshold: 0% 00:09:18.543 Life Percentage Used: 0% 00:09:18.543 Data Units Read: 2137 00:09:18.543 Data Units Written: 1925 00:09:18.543 Host Read Commands: 101718 00:09:18.543 Host Write Commands: 99987 00:09:18.543 Controller Busy Time: 0 minutes 00:09:18.543 Power Cycles: 0 00:09:18.543 Power On Hours: 0 hours 00:09:18.543 Unsafe Shutdowns: 0 00:09:18.543 Unrecoverable Media Errors: 0 00:09:18.543 Lifetime Error Log Entries: 0 00:09:18.543 Warning Temperature Time: 0 minutes 00:09:18.543 Critical Temperature Time: 0 minutes 00:09:18.543 00:09:18.543 Number of Queues 00:09:18.543 ================ 00:09:18.543 Number of I/O Submission Queues: 64 00:09:18.543 Number of I/O Completion Queues: 64 00:09:18.543 00:09:18.543 ZNS Specific Controller Data 00:09:18.543 ============================ 00:09:18.543 Zone Append Size Limit: 0 00:09:18.543 00:09:18.543 00:09:18.543 Active Namespaces 00:09:18.543 ================= 00:09:18.543 Namespace ID:1 00:09:18.543 Error Recovery Timeout: Unlimited 00:09:18.543 Command Set Identifier: NVM (00h) 00:09:18.543 Deallocate: Supported 00:09:18.543 Deallocated/Unwritten Error: Supported 00:09:18.543 Deallocated Read Value: All 0x00 00:09:18.543 Deallocate in Write Zeroes: Not Supported 00:09:18.543 Deallocated Guard Field: 0xFFFF 00:09:18.543 Flush: Supported 00:09:18.543 Reservation: Not Supported 00:09:18.543 Namespace Sharing Capabilities: Private 00:09:18.543 Size (in LBAs): 1048576 (4GiB) 00:09:18.543 Capacity (in LBAs): 1048576 (4GiB) 00:09:18.543 Utilization (in LBAs): 1048576 (4GiB) 00:09:18.543 Thin Provisioning: Not Supported 00:09:18.543 Per-NS Atomic Units: No 00:09:18.543 Maximum Single Source Range Length: 128 00:09:18.543 Maximum Copy Length: 128 00:09:18.543 Maximum Source Range Count: 128 00:09:18.543 NGUID/EUI64 Never Reused: No 00:09:18.543 Namespace Write Protected: No 00:09:18.543 Number of LBA Formats: 8 00:09:18.543 Current LBA Format: LBA Format #04 00:09:18.543 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:18.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.543 00:09:18.543 NVM Specific Namespace Data 00:09:18.543 =========================== 00:09:18.543 Logical Block Storage Tag Mask: 0 00:09:18.543 Protection Information Capabilities: 00:09:18.543 16b Guard Protection Information Storage Tag Support: No 00:09:18.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.543 Storage Tag Check Read Support: No 00:09:18.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Namespace ID:2 00:09:18.543 Error Recovery Timeout: Unlimited 00:09:18.543 Command Set Identifier: NVM (00h) 00:09:18.543 Deallocate: Supported 00:09:18.543 Deallocated/Unwritten Error: Supported 00:09:18.543 Deallocated Read Value: All 0x00 00:09:18.543 Deallocate in Write Zeroes: Not Supported 00:09:18.543 Deallocated Guard Field: 0xFFFF 00:09:18.543 Flush: Supported 00:09:18.543 Reservation: Not Supported 00:09:18.543 Namespace Sharing Capabilities: Private 00:09:18.543 Size (in LBAs): 1048576 (4GiB) 00:09:18.543 Capacity (in LBAs): 1048576 (4GiB) 00:09:18.543 Utilization (in LBAs): 1048576 (4GiB) 00:09:18.543 Thin Provisioning: Not Supported 00:09:18.543 Per-NS Atomic Units: No 00:09:18.543 Maximum Single Source Range Length: 128 00:09:18.543 Maximum Copy Length: 128 00:09:18.543 Maximum Source Range Count: 128 00:09:18.543 NGUID/EUI64 Never Reused: No 00:09:18.543 Namespace Write Protected: No 00:09:18.543 Number of LBA Formats: 8 00:09:18.543 Current LBA Format: LBA Format #04 00:09:18.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.543 00:09:18.543 NVM Specific Namespace Data 00:09:18.543 =========================== 00:09:18.543 Logical Block Storage Tag Mask: 0 00:09:18.543 Protection Information Capabilities: 00:09:18.543 16b Guard Protection Information Storage Tag Support: No 00:09:18.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
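The eight LBA formats repeated for each namespace above pair a data size with an inline metadata size; since these controllers advertise extended LBA formats, the metadata travels in-band and the effective sector size is the sum. A small sketch tabulating the values from the dump:

    #!/usr/bin/env bash
    # Extended-LBA sector size = data size + inline metadata size (table above).
    data=(512 512 512 512 4096 4096 4096 4096)
    meta=(0 8 16 64 0 8 16 64)
    for i in "${!data[@]}"; do
      printf 'LBA Format #%02d: %4d + %2d = %4d bytes per extended LBA\n' \
        "$i" "${data[i]}" "${meta[i]}" $(( data[i] + meta[i] ))
    done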
00:09:18.543 Storage Tag Check Read Support: No 00:09:18.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Namespace ID:3 00:09:18.543 Error Recovery Timeout: Unlimited 00:09:18.543 Command Set Identifier: NVM (00h) 00:09:18.543 Deallocate: Supported 00:09:18.543 Deallocated/Unwritten Error: Supported 00:09:18.543 Deallocated Read Value: All 0x00 00:09:18.543 Deallocate in Write Zeroes: Not Supported 00:09:18.543 Deallocated Guard Field: 0xFFFF 00:09:18.543 Flush: Supported 00:09:18.543 Reservation: Not Supported 00:09:18.543 Namespace Sharing Capabilities: Private 00:09:18.543 Size (in LBAs): 1048576 (4GiB) 00:09:18.543 Capacity (in LBAs): 1048576 (4GiB) 00:09:18.543 Utilization (in LBAs): 1048576 (4GiB) 00:09:18.543 Thin Provisioning: Not Supported 00:09:18.543 Per-NS Atomic Units: No 00:09:18.543 Maximum Single Source Range Length: 128 00:09:18.543 Maximum Copy Length: 128 00:09:18.543 Maximum Source Range Count: 128 00:09:18.543 NGUID/EUI64 Never Reused: No 00:09:18.543 Namespace Write Protected: No 00:09:18.543 Number of LBA Formats: 8 00:09:18.543 Current LBA Format: LBA Format #04 00:09:18.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:18.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:18.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:18.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:18.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:18.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:18.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:18.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:18.543 00:09:18.543 NVM Specific Namespace Data 00:09:18.543 =========================== 00:09:18.543 Logical Block Storage Tag Mask: 0 00:09:18.543 Protection Information Capabilities: 00:09:18.543 16b Guard Protection Information Storage Tag Support: No 00:09:18.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:18.543 Storage Tag Check Read Support: No 00:09:18.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:18.543 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:18.543 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:19.110 ===================================================== 00:09:19.110 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:19.110 ===================================================== 00:09:19.110 Controller Capabilities/Features 00:09:19.110 ================================ 00:09:19.110 Vendor ID: 1b36 00:09:19.110 Subsystem Vendor ID: 1af4 00:09:19.110 Serial Number: 12340 00:09:19.110 Model Number: QEMU NVMe Ctrl 00:09:19.110 Firmware Version: 8.0.0 00:09:19.110 Recommended Arb Burst: 6 00:09:19.110 IEEE OUI Identifier: 00 54 52 00:09:19.110 Multi-path I/O 00:09:19.110 May have multiple subsystem ports: No 00:09:19.110 May have multiple controllers: No 00:09:19.110 Associated with SR-IOV VF: No 00:09:19.110 Max Data Transfer Size: 524288 00:09:19.110 Max Number of Namespaces: 256 00:09:19.110 Max Number of I/O Queues: 64 00:09:19.110 NVMe Specification Version (VS): 1.4 00:09:19.110 NVMe Specification Version (Identify): 1.4 00:09:19.110 Maximum Queue Entries: 2048 00:09:19.110 Contiguous Queues Required: Yes 00:09:19.110 Arbitration Mechanisms Supported 00:09:19.110 Weighted Round Robin: Not Supported 00:09:19.110 Vendor Specific: Not Supported 00:09:19.110 Reset Timeout: 7500 ms 00:09:19.110 Doorbell Stride: 4 bytes 00:09:19.110 NVM Subsystem Reset: Not Supported 00:09:19.110 Command Sets Supported 00:09:19.110 NVM Command Set: Supported 00:09:19.110 Boot Partition: Not Supported 00:09:19.110 Memory Page Size Minimum: 4096 bytes 00:09:19.110 Memory Page Size Maximum: 65536 bytes 00:09:19.110 Persistent Memory Region: Not Supported 00:09:19.110 Optional Asynchronous Events Supported 00:09:19.110 Namespace Attribute Notices: Supported 00:09:19.110 Firmware Activation Notices: Not Supported 00:09:19.110 ANA Change Notices: Not Supported 00:09:19.110 PLE Aggregate Log Change Notices: Not Supported 00:09:19.110 LBA Status Info Alert Notices: Not Supported 00:09:19.110 EGE Aggregate Log Change Notices: Not Supported 00:09:19.110 Normal NVM Subsystem Shutdown event: Not Supported 00:09:19.110 Zone Descriptor Change Notices: Not Supported 00:09:19.110 Discovery Log Change Notices: Not Supported 00:09:19.110 Controller Attributes 00:09:19.110 128-bit Host Identifier: Not Supported 00:09:19.110 Non-Operational Permissive Mode: Not Supported 00:09:19.110 NVM Sets: Not Supported 00:09:19.110 Read Recovery Levels: Not Supported 00:09:19.110 Endurance Groups: Not Supported 00:09:19.110 Predictable Latency Mode: Not Supported 00:09:19.110 Traffic Based Keep ALive: Not Supported 00:09:19.110 Namespace Granularity: Not Supported 00:09:19.110 SQ Associations: Not Supported 00:09:19.110 UUID List: Not Supported 00:09:19.110 Multi-Domain Subsystem: Not Supported 00:09:19.110 Fixed Capacity Management: Not Supported 00:09:19.110 Variable Capacity Management: Not Supported 00:09:19.110 Delete Endurance Group: Not Supported 00:09:19.110 Delete NVM Set: Not Supported 00:09:19.110 Extended LBA Formats Supported: Supported 00:09:19.110 Flexible Data Placement Supported: Not Supported 00:09:19.110 00:09:19.110 Controller Memory Buffer Support 00:09:19.110 ================================ 00:09:19.110 Supported: No 00:09:19.110 00:09:19.110 Persistent Memory Region Support 00:09:19.110 
================================ 00:09:19.110 Supported: No 00:09:19.110 00:09:19.110 Admin Command Set Attributes 00:09:19.110 ============================ 00:09:19.110 Security Send/Receive: Not Supported 00:09:19.110 Format NVM: Supported 00:09:19.110 Firmware Activate/Download: Not Supported 00:09:19.110 Namespace Management: Supported 00:09:19.110 Device Self-Test: Not Supported 00:09:19.110 Directives: Supported 00:09:19.110 NVMe-MI: Not Supported 00:09:19.110 Virtualization Management: Not Supported 00:09:19.110 Doorbell Buffer Config: Supported 00:09:19.110 Get LBA Status Capability: Not Supported 00:09:19.110 Command & Feature Lockdown Capability: Not Supported 00:09:19.110 Abort Command Limit: 4 00:09:19.110 Async Event Request Limit: 4 00:09:19.110 Number of Firmware Slots: N/A 00:09:19.110 Firmware Slot 1 Read-Only: N/A 00:09:19.110 Firmware Activation Without Reset: N/A 00:09:19.110 Multiple Update Detection Support: N/A 00:09:19.110 Firmware Update Granularity: No Information Provided 00:09:19.110 Per-Namespace SMART Log: Yes 00:09:19.110 Asymmetric Namespace Access Log Page: Not Supported 00:09:19.110 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:19.110 Command Effects Log Page: Supported 00:09:19.110 Get Log Page Extended Data: Supported 00:09:19.110 Telemetry Log Pages: Not Supported 00:09:19.110 Persistent Event Log Pages: Not Supported 00:09:19.110 Supported Log Pages Log Page: May Support 00:09:19.110 Commands Supported & Effects Log Page: Not Supported 00:09:19.110 Feature Identifiers & Effects Log Page:May Support 00:09:19.110 NVMe-MI Commands & Effects Log Page: May Support 00:09:19.110 Data Area 4 for Telemetry Log: Not Supported 00:09:19.110 Error Log Page Entries Supported: 1 00:09:19.110 Keep Alive: Not Supported 00:09:19.110 00:09:19.110 NVM Command Set Attributes 00:09:19.110 ========================== 00:09:19.110 Submission Queue Entry Size 00:09:19.110 Max: 64 00:09:19.110 Min: 64 00:09:19.110 Completion Queue Entry Size 00:09:19.110 Max: 16 00:09:19.110 Min: 16 00:09:19.110 Number of Namespaces: 256 00:09:19.110 Compare Command: Supported 00:09:19.110 Write Uncorrectable Command: Not Supported 00:09:19.110 Dataset Management Command: Supported 00:09:19.110 Write Zeroes Command: Supported 00:09:19.110 Set Features Save Field: Supported 00:09:19.110 Reservations: Not Supported 00:09:19.110 Timestamp: Supported 00:09:19.110 Copy: Supported 00:09:19.110 Volatile Write Cache: Present 00:09:19.110 Atomic Write Unit (Normal): 1 00:09:19.110 Atomic Write Unit (PFail): 1 00:09:19.110 Atomic Compare & Write Unit: 1 00:09:19.110 Fused Compare & Write: Not Supported 00:09:19.110 Scatter-Gather List 00:09:19.110 SGL Command Set: Supported 00:09:19.110 SGL Keyed: Not Supported 00:09:19.110 SGL Bit Bucket Descriptor: Not Supported 00:09:19.110 SGL Metadata Pointer: Not Supported 00:09:19.110 Oversized SGL: Not Supported 00:09:19.110 SGL Metadata Address: Not Supported 00:09:19.110 SGL Offset: Not Supported 00:09:19.110 Transport SGL Data Block: Not Supported 00:09:19.110 Replay Protected Memory Block: Not Supported 00:09:19.110 00:09:19.110 Firmware Slot Information 00:09:19.110 ========================= 00:09:19.110 Active slot: 1 00:09:19.110 Slot 1 Firmware Revision: 1.0 00:09:19.110 00:09:19.110 00:09:19.110 Commands Supported and Effects 00:09:19.110 ============================== 00:09:19.110 Admin Commands 00:09:19.110 -------------- 00:09:19.110 Delete I/O Submission Queue (00h): Supported 00:09:19.110 Create I/O Submission Queue (01h): Supported 00:09:19.110 
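With "Maximum Queue Entries: 2048" and the entry sizes listed above (64-byte submission entries, 16-byte completion entries), the worst-case memory behind a single queue pair is easy to bound:

    #!/usr/bin/env bash
    # Per-queue memory at the maximum depth this controller allows.
    entries=2048 sqe=64 cqe=16
    echo "SQ: $(( entries * sqe )) bytes ($(( entries * sqe / 1024 )) KiB)"   # 128 KiB
    echo "CQ: $(( entries * cqe )) bytes ($(( entries * cqe / 1024 )) KiB)"   # 32 KiB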
Get Log Page (02h): Supported 00:09:19.110 Delete I/O Completion Queue (04h): Supported 00:09:19.110 Create I/O Completion Queue (05h): Supported 00:09:19.110 Identify (06h): Supported 00:09:19.110 Abort (08h): Supported 00:09:19.110 Set Features (09h): Supported 00:09:19.111 Get Features (0Ah): Supported 00:09:19.111 Asynchronous Event Request (0Ch): Supported 00:09:19.111 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:19.111 Directive Send (19h): Supported 00:09:19.111 Directive Receive (1Ah): Supported 00:09:19.111 Virtualization Management (1Ch): Supported 00:09:19.111 Doorbell Buffer Config (7Ch): Supported 00:09:19.111 Format NVM (80h): Supported LBA-Change 00:09:19.111 I/O Commands 00:09:19.111 ------------ 00:09:19.111 Flush (00h): Supported LBA-Change 00:09:19.111 Write (01h): Supported LBA-Change 00:09:19.111 Read (02h): Supported 00:09:19.111 Compare (05h): Supported 00:09:19.111 Write Zeroes (08h): Supported LBA-Change 00:09:19.111 Dataset Management (09h): Supported LBA-Change 00:09:19.111 Unknown (0Ch): Supported 00:09:19.111 Unknown (12h): Supported 00:09:19.111 Copy (19h): Supported LBA-Change 00:09:19.111 Unknown (1Dh): Supported LBA-Change 00:09:19.111 00:09:19.111 Error Log 00:09:19.111 ========= 00:09:19.111 00:09:19.111 Arbitration 00:09:19.111 =========== 00:09:19.111 Arbitration Burst: no limit 00:09:19.111 00:09:19.111 Power Management 00:09:19.111 ================ 00:09:19.111 Number of Power States: 1 00:09:19.111 Current Power State: Power State #0 00:09:19.111 Power State #0: 00:09:19.111 Max Power: 25.00 W 00:09:19.111 Non-Operational State: Operational 00:09:19.111 Entry Latency: 16 microseconds 00:09:19.111 Exit Latency: 4 microseconds 00:09:19.111 Relative Read Throughput: 0 00:09:19.111 Relative Read Latency: 0 00:09:19.111 Relative Write Throughput: 0 00:09:19.111 Relative Write Latency: 0 00:09:19.111 Idle Power: Not Reported 00:09:19.111 Active Power: Not Reported 00:09:19.111 Non-Operational Permissive Mode: Not Supported 00:09:19.111 00:09:19.111 Health Information 00:09:19.111 ================== 00:09:19.111 Critical Warnings: 00:09:19.111 Available Spare Space: OK 00:09:19.111 Temperature: OK 00:09:19.111 Device Reliability: OK 00:09:19.111 Read Only: No 00:09:19.111 Volatile Memory Backup: OK 00:09:19.111 Current Temperature: 323 Kelvin (50 Celsius) 00:09:19.111 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:19.111 Available Spare: 0% 00:09:19.111 Available Spare Threshold: 0% 00:09:19.111 Life Percentage Used: 0% 00:09:19.111 Data Units Read: 635 00:09:19.111 Data Units Written: 563 00:09:19.111 Host Read Commands: 33116 00:09:19.111 Host Write Commands: 32902 00:09:19.111 Controller Busy Time: 0 minutes 00:09:19.111 Power Cycles: 0 00:09:19.111 Power On Hours: 0 hours 00:09:19.111 Unsafe Shutdowns: 0 00:09:19.111 Unrecoverable Media Errors: 0 00:09:19.111 Lifetime Error Log Entries: 0 00:09:19.111 Warning Temperature Time: 0 minutes 00:09:19.111 Critical Temperature Time: 0 minutes 00:09:19.111 00:09:19.111 Number of Queues 00:09:19.111 ================ 00:09:19.111 Number of I/O Submission Queues: 64 00:09:19.111 Number of I/O Completion Queues: 64 00:09:19.111 00:09:19.111 ZNS Specific Controller Data 00:09:19.111 ============================ 00:09:19.111 Zone Append Size Limit: 0 00:09:19.111 00:09:19.111 00:09:19.111 Active Namespaces 00:09:19.111 ================= 00:09:19.111 Namespace ID:1 00:09:19.111 Error Recovery Timeout: Unlimited 00:09:19.111 Command Set Identifier: NVM (00h) 00:09:19.111 Deallocate: Supported 
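The SMART figures above use spec units: temperatures in Kelvin and "Data Units" in thousands of 512-byte units. Converting the 12340 controller's numbers:

    #!/usr/bin/env bash
    # NVMe SMART units: temperature in Kelvin; 1 Data Unit = 1000 * 512 bytes.
    temp_k=323 thresh_k=343 units_read=635
    echo "temperature: $(( temp_k - 273 )) C (threshold $(( thresh_k - 273 )) C)"
    bytes=$(( units_read * 512000 ))
    echo "data read: $bytes bytes (~$(( bytes / 1024 / 1024 )) MiB)"   # ~310 MiB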
00:09:19.111 Deallocated/Unwritten Error: Supported 00:09:19.111 Deallocated Read Value: All 0x00 00:09:19.111 Deallocate in Write Zeroes: Not Supported 00:09:19.111 Deallocated Guard Field: 0xFFFF 00:09:19.111 Flush: Supported 00:09:19.111 Reservation: Not Supported 00:09:19.111 Metadata Transferred as: Separate Metadata Buffer 00:09:19.111 Namespace Sharing Capabilities: Private 00:09:19.111 Size (in LBAs): 1548666 (5GiB) 00:09:19.111 Capacity (in LBAs): 1548666 (5GiB) 00:09:19.111 Utilization (in LBAs): 1548666 (5GiB) 00:09:19.111 Thin Provisioning: Not Supported 00:09:19.111 Per-NS Atomic Units: No 00:09:19.111 Maximum Single Source Range Length: 128 00:09:19.111 Maximum Copy Length: 128 00:09:19.111 Maximum Source Range Count: 128 00:09:19.111 NGUID/EUI64 Never Reused: No 00:09:19.111 Namespace Write Protected: No 00:09:19.111 Number of LBA Formats: 8 00:09:19.111 Current LBA Format: LBA Format #07 00:09:19.111 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.111 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.111 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.111 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.111 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.111 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.111 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.111 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.111 00:09:19.111 NVM Specific Namespace Data 00:09:19.111 =========================== 00:09:19.111 Logical Block Storage Tag Mask: 0 00:09:19.111 Protection Information Capabilities: 00:09:19.111 16b Guard Protection Information Storage Tag Support: No 00:09:19.111 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.111 Storage Tag Check Read Support: No 00:09:19.111 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.111 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:19.111 11:21:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:19.370 ===================================================== 00:09:19.370 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:19.370 ===================================================== 00:09:19.370 Controller Capabilities/Features 00:09:19.370 ================================ 00:09:19.370 Vendor ID: 1b36 00:09:19.370 Subsystem Vendor ID: 1af4 00:09:19.370 Serial Number: 12341 00:09:19.370 Model Number: QEMU NVMe Ctrl 00:09:19.370 Firmware Version: 8.0.0 00:09:19.370 Recommended Arb Burst: 6 00:09:19.370 IEEE OUI Identifier: 00 54 52 00:09:19.370 Multi-path I/O 00:09:19.370 May have multiple subsystem ports: No 00:09:19.370 May have multiple 
controllers: No 00:09:19.370 Associated with SR-IOV VF: No 00:09:19.370 Max Data Transfer Size: 524288 00:09:19.370 Max Number of Namespaces: 256 00:09:19.370 Max Number of I/O Queues: 64 00:09:19.370 NVMe Specification Version (VS): 1.4 00:09:19.370 NVMe Specification Version (Identify): 1.4 00:09:19.370 Maximum Queue Entries: 2048 00:09:19.370 Contiguous Queues Required: Yes 00:09:19.370 Arbitration Mechanisms Supported 00:09:19.370 Weighted Round Robin: Not Supported 00:09:19.370 Vendor Specific: Not Supported 00:09:19.370 Reset Timeout: 7500 ms 00:09:19.370 Doorbell Stride: 4 bytes 00:09:19.370 NVM Subsystem Reset: Not Supported 00:09:19.370 Command Sets Supported 00:09:19.370 NVM Command Set: Supported 00:09:19.370 Boot Partition: Not Supported 00:09:19.370 Memory Page Size Minimum: 4096 bytes 00:09:19.370 Memory Page Size Maximum: 65536 bytes 00:09:19.370 Persistent Memory Region: Not Supported 00:09:19.370 Optional Asynchronous Events Supported 00:09:19.370 Namespace Attribute Notices: Supported 00:09:19.370 Firmware Activation Notices: Not Supported 00:09:19.370 ANA Change Notices: Not Supported 00:09:19.370 PLE Aggregate Log Change Notices: Not Supported 00:09:19.370 LBA Status Info Alert Notices: Not Supported 00:09:19.370 EGE Aggregate Log Change Notices: Not Supported 00:09:19.370 Normal NVM Subsystem Shutdown event: Not Supported 00:09:19.370 Zone Descriptor Change Notices: Not Supported 00:09:19.370 Discovery Log Change Notices: Not Supported 00:09:19.370 Controller Attributes 00:09:19.370 128-bit Host Identifier: Not Supported 00:09:19.370 Non-Operational Permissive Mode: Not Supported 00:09:19.370 NVM Sets: Not Supported 00:09:19.370 Read Recovery Levels: Not Supported 00:09:19.370 Endurance Groups: Not Supported 00:09:19.370 Predictable Latency Mode: Not Supported 00:09:19.370 Traffic Based Keep ALive: Not Supported 00:09:19.370 Namespace Granularity: Not Supported 00:09:19.370 SQ Associations: Not Supported 00:09:19.370 UUID List: Not Supported 00:09:19.370 Multi-Domain Subsystem: Not Supported 00:09:19.370 Fixed Capacity Management: Not Supported 00:09:19.370 Variable Capacity Management: Not Supported 00:09:19.370 Delete Endurance Group: Not Supported 00:09:19.370 Delete NVM Set: Not Supported 00:09:19.370 Extended LBA Formats Supported: Supported 00:09:19.370 Flexible Data Placement Supported: Not Supported 00:09:19.370 00:09:19.370 Controller Memory Buffer Support 00:09:19.370 ================================ 00:09:19.370 Supported: No 00:09:19.370 00:09:19.370 Persistent Memory Region Support 00:09:19.370 ================================ 00:09:19.370 Supported: No 00:09:19.370 00:09:19.370 Admin Command Set Attributes 00:09:19.370 ============================ 00:09:19.371 Security Send/Receive: Not Supported 00:09:19.371 Format NVM: Supported 00:09:19.371 Firmware Activate/Download: Not Supported 00:09:19.371 Namespace Management: Supported 00:09:19.371 Device Self-Test: Not Supported 00:09:19.371 Directives: Supported 00:09:19.371 NVMe-MI: Not Supported 00:09:19.371 Virtualization Management: Not Supported 00:09:19.371 Doorbell Buffer Config: Supported 00:09:19.371 Get LBA Status Capability: Not Supported 00:09:19.371 Command & Feature Lockdown Capability: Not Supported 00:09:19.371 Abort Command Limit: 4 00:09:19.371 Async Event Request Limit: 4 00:09:19.371 Number of Firmware Slots: N/A 00:09:19.371 Firmware Slot 1 Read-Only: N/A 00:09:19.371 Firmware Activation Without Reset: N/A 00:09:19.371 Multiple Update Detection Support: N/A 00:09:19.371 Firmware Update 
Granularity: No Information Provided 00:09:19.371 Per-Namespace SMART Log: Yes 00:09:19.371 Asymmetric Namespace Access Log Page: Not Supported 00:09:19.371 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:19.371 Command Effects Log Page: Supported 00:09:19.371 Get Log Page Extended Data: Supported 00:09:19.371 Telemetry Log Pages: Not Supported 00:09:19.371 Persistent Event Log Pages: Not Supported 00:09:19.371 Supported Log Pages Log Page: May Support 00:09:19.371 Commands Supported & Effects Log Page: Not Supported 00:09:19.371 Feature Identifiers & Effects Log Page:May Support 00:09:19.371 NVMe-MI Commands & Effects Log Page: May Support 00:09:19.371 Data Area 4 for Telemetry Log: Not Supported 00:09:19.371 Error Log Page Entries Supported: 1 00:09:19.371 Keep Alive: Not Supported 00:09:19.371 00:09:19.371 NVM Command Set Attributes 00:09:19.371 ========================== 00:09:19.371 Submission Queue Entry Size 00:09:19.371 Max: 64 00:09:19.371 Min: 64 00:09:19.371 Completion Queue Entry Size 00:09:19.371 Max: 16 00:09:19.371 Min: 16 00:09:19.371 Number of Namespaces: 256 00:09:19.371 Compare Command: Supported 00:09:19.371 Write Uncorrectable Command: Not Supported 00:09:19.371 Dataset Management Command: Supported 00:09:19.371 Write Zeroes Command: Supported 00:09:19.371 Set Features Save Field: Supported 00:09:19.371 Reservations: Not Supported 00:09:19.371 Timestamp: Supported 00:09:19.371 Copy: Supported 00:09:19.371 Volatile Write Cache: Present 00:09:19.371 Atomic Write Unit (Normal): 1 00:09:19.371 Atomic Write Unit (PFail): 1 00:09:19.371 Atomic Compare & Write Unit: 1 00:09:19.371 Fused Compare & Write: Not Supported 00:09:19.371 Scatter-Gather List 00:09:19.371 SGL Command Set: Supported 00:09:19.371 SGL Keyed: Not Supported 00:09:19.371 SGL Bit Bucket Descriptor: Not Supported 00:09:19.371 SGL Metadata Pointer: Not Supported 00:09:19.371 Oversized SGL: Not Supported 00:09:19.371 SGL Metadata Address: Not Supported 00:09:19.371 SGL Offset: Not Supported 00:09:19.371 Transport SGL Data Block: Not Supported 00:09:19.371 Replay Protected Memory Block: Not Supported 00:09:19.371 00:09:19.371 Firmware Slot Information 00:09:19.371 ========================= 00:09:19.371 Active slot: 1 00:09:19.371 Slot 1 Firmware Revision: 1.0 00:09:19.371 00:09:19.371 00:09:19.371 Commands Supported and Effects 00:09:19.371 ============================== 00:09:19.371 Admin Commands 00:09:19.371 -------------- 00:09:19.371 Delete I/O Submission Queue (00h): Supported 00:09:19.371 Create I/O Submission Queue (01h): Supported 00:09:19.371 Get Log Page (02h): Supported 00:09:19.371 Delete I/O Completion Queue (04h): Supported 00:09:19.371 Create I/O Completion Queue (05h): Supported 00:09:19.371 Identify (06h): Supported 00:09:19.371 Abort (08h): Supported 00:09:19.371 Set Features (09h): Supported 00:09:19.371 Get Features (0Ah): Supported 00:09:19.371 Asynchronous Event Request (0Ch): Supported 00:09:19.371 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:19.371 Directive Send (19h): Supported 00:09:19.371 Directive Receive (1Ah): Supported 00:09:19.371 Virtualization Management (1Ch): Supported 00:09:19.371 Doorbell Buffer Config (7Ch): Supported 00:09:19.371 Format NVM (80h): Supported LBA-Change 00:09:19.371 I/O Commands 00:09:19.371 ------------ 00:09:19.371 Flush (00h): Supported LBA-Change 00:09:19.371 Write (01h): Supported LBA-Change 00:09:19.371 Read (02h): Supported 00:09:19.371 Compare (05h): Supported 00:09:19.371 Write Zeroes (08h): Supported LBA-Change 00:09:19.371 
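The traced nvme.sh@15/@16 lines scattered through this log show the loop producing each of these dumps. Reconstructed as a standalone script, using the binary path, flags, and BDFs visible in this run:

    #!/usr/bin/env bash
    # One spdk_nvme_identify pass per PCIe controller, as nvme.sh does above.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
    done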
Dataset Management (09h): Supported LBA-Change 00:09:19.371 Unknown (0Ch): Supported 00:09:19.371 Unknown (12h): Supported 00:09:19.371 Copy (19h): Supported LBA-Change 00:09:19.371 Unknown (1Dh): Supported LBA-Change 00:09:19.371 00:09:19.371 Error Log 00:09:19.371 ========= 00:09:19.371 00:09:19.371 Arbitration 00:09:19.371 =========== 00:09:19.371 Arbitration Burst: no limit 00:09:19.371 00:09:19.371 Power Management 00:09:19.371 ================ 00:09:19.371 Number of Power States: 1 00:09:19.371 Current Power State: Power State #0 00:09:19.371 Power State #0: 00:09:19.371 Max Power: 25.00 W 00:09:19.371 Non-Operational State: Operational 00:09:19.371 Entry Latency: 16 microseconds 00:09:19.371 Exit Latency: 4 microseconds 00:09:19.371 Relative Read Throughput: 0 00:09:19.371 Relative Read Latency: 0 00:09:19.371 Relative Write Throughput: 0 00:09:19.371 Relative Write Latency: 0 00:09:19.371 Idle Power: Not Reported 00:09:19.371 Active Power: Not Reported 00:09:19.371 Non-Operational Permissive Mode: Not Supported 00:09:19.371 00:09:19.371 Health Information 00:09:19.371 ================== 00:09:19.371 Critical Warnings: 00:09:19.371 Available Spare Space: OK 00:09:19.371 Temperature: OK 00:09:19.371 Device Reliability: OK 00:09:19.371 Read Only: No 00:09:19.371 Volatile Memory Backup: OK 00:09:19.371 Current Temperature: 323 Kelvin (50 Celsius) 00:09:19.371 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:19.371 Available Spare: 0% 00:09:19.371 Available Spare Threshold: 0% 00:09:19.371 Life Percentage Used: 0% 00:09:19.371 Data Units Read: 1041 00:09:19.371 Data Units Written: 913 00:09:19.371 Host Read Commands: 50002 00:09:19.371 Host Write Commands: 48839 00:09:19.371 Controller Busy Time: 0 minutes 00:09:19.371 Power Cycles: 0 00:09:19.371 Power On Hours: 0 hours 00:09:19.371 Unsafe Shutdowns: 0 00:09:19.371 Unrecoverable Media Errors: 0 00:09:19.371 Lifetime Error Log Entries: 0 00:09:19.371 Warning Temperature Time: 0 minutes 00:09:19.372 Critical Temperature Time: 0 minutes 00:09:19.372 00:09:19.372 Number of Queues 00:09:19.372 ================ 00:09:19.372 Number of I/O Submission Queues: 64 00:09:19.372 Number of I/O Completion Queues: 64 00:09:19.372 00:09:19.372 ZNS Specific Controller Data 00:09:19.372 ============================ 00:09:19.372 Zone Append Size Limit: 0 00:09:19.372 00:09:19.372 00:09:19.372 Active Namespaces 00:09:19.372 ================= 00:09:19.372 Namespace ID:1 00:09:19.372 Error Recovery Timeout: Unlimited 00:09:19.372 Command Set Identifier: NVM (00h) 00:09:19.372 Deallocate: Supported 00:09:19.372 Deallocated/Unwritten Error: Supported 00:09:19.372 Deallocated Read Value: All 0x00 00:09:19.372 Deallocate in Write Zeroes: Not Supported 00:09:19.372 Deallocated Guard Field: 0xFFFF 00:09:19.372 Flush: Supported 00:09:19.372 Reservation: Not Supported 00:09:19.372 Namespace Sharing Capabilities: Private 00:09:19.372 Size (in LBAs): 1310720 (5GiB) 00:09:19.372 Capacity (in LBAs): 1310720 (5GiB) 00:09:19.372 Utilization (in LBAs): 1310720 (5GiB) 00:09:19.372 Thin Provisioning: Not Supported 00:09:19.372 Per-NS Atomic Units: No 00:09:19.372 Maximum Single Source Range Length: 128 00:09:19.372 Maximum Copy Length: 128 00:09:19.372 Maximum Source Range Count: 128 00:09:19.372 NGUID/EUI64 Never Reused: No 00:09:19.372 Namespace Write Protected: No 00:09:19.372 Number of LBA Formats: 8 00:09:19.372 Current LBA Format: LBA Format #04 00:09:19.372 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.372 LBA Format #01: Data Size: 512 Metadata Size: 8 
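The size/capacity lines can be cross-checked against the current LBA format: the 12341 namespace reports 1310720 LBAs under format #04 (4096-byte data, no metadata), which is exactly the advertised 5 GiB:

    #!/usr/bin/env bash
    # Namespace bytes = LBA count * block size of the current LBA format.
    lbas=1310720 block=4096
    bytes=$(( lbas * block ))
    echo "$bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"   # 5368709120 = 5 GiB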
00:09:19.372 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.372 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.372 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.372 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.372 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.372 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.372 00:09:19.372 NVM Specific Namespace Data 00:09:19.372 =========================== 00:09:19.372 Logical Block Storage Tag Mask: 0 00:09:19.372 Protection Information Capabilities: 00:09:19.372 16b Guard Protection Information Storage Tag Support: No 00:09:19.372 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.372 Storage Tag Check Read Support: No 00:09:19.372 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.372 11:21:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:19.372 11:21:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:19.631 ===================================================== 00:09:19.631 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:19.631 ===================================================== 00:09:19.631 Controller Capabilities/Features 00:09:19.631 ================================ 00:09:19.631 Vendor ID: 1b36 00:09:19.631 Subsystem Vendor ID: 1af4 00:09:19.631 Serial Number: 12342 00:09:19.631 Model Number: QEMU NVMe Ctrl 00:09:19.631 Firmware Version: 8.0.0 00:09:19.631 Recommended Arb Burst: 6 00:09:19.631 IEEE OUI Identifier: 00 54 52 00:09:19.631 Multi-path I/O 00:09:19.631 May have multiple subsystem ports: No 00:09:19.631 May have multiple controllers: No 00:09:19.631 Associated with SR-IOV VF: No 00:09:19.631 Max Data Transfer Size: 524288 00:09:19.631 Max Number of Namespaces: 256 00:09:19.631 Max Number of I/O Queues: 64 00:09:19.631 NVMe Specification Version (VS): 1.4 00:09:19.631 NVMe Specification Version (Identify): 1.4 00:09:19.631 Maximum Queue Entries: 2048 00:09:19.631 Contiguous Queues Required: Yes 00:09:19.631 Arbitration Mechanisms Supported 00:09:19.631 Weighted Round Robin: Not Supported 00:09:19.631 Vendor Specific: Not Supported 00:09:19.631 Reset Timeout: 7500 ms 00:09:19.631 Doorbell Stride: 4 bytes 00:09:19.631 NVM Subsystem Reset: Not Supported 00:09:19.631 Command Sets Supported 00:09:19.631 NVM Command Set: Supported 00:09:19.631 Boot Partition: Not Supported 00:09:19.631 Memory Page Size Minimum: 4096 bytes 00:09:19.631 Memory Page Size Maximum: 65536 bytes 00:09:19.631 Persistent Memory Region: Not Supported 00:09:19.631 Optional Asynchronous Events Supported 00:09:19.631 Namespace Attribute Notices: Supported 00:09:19.631 Firmware 
Activation Notices: Not Supported 00:09:19.631 ANA Change Notices: Not Supported 00:09:19.631 PLE Aggregate Log Change Notices: Not Supported 00:09:19.631 LBA Status Info Alert Notices: Not Supported 00:09:19.631 EGE Aggregate Log Change Notices: Not Supported 00:09:19.631 Normal NVM Subsystem Shutdown event: Not Supported 00:09:19.631 Zone Descriptor Change Notices: Not Supported 00:09:19.631 Discovery Log Change Notices: Not Supported 00:09:19.631 Controller Attributes 00:09:19.631 128-bit Host Identifier: Not Supported 00:09:19.631 Non-Operational Permissive Mode: Not Supported 00:09:19.631 NVM Sets: Not Supported 00:09:19.631 Read Recovery Levels: Not Supported 00:09:19.631 Endurance Groups: Not Supported 00:09:19.631 Predictable Latency Mode: Not Supported 00:09:19.631 Traffic Based Keep ALive: Not Supported 00:09:19.631 Namespace Granularity: Not Supported 00:09:19.631 SQ Associations: Not Supported 00:09:19.631 UUID List: Not Supported 00:09:19.631 Multi-Domain Subsystem: Not Supported 00:09:19.631 Fixed Capacity Management: Not Supported 00:09:19.631 Variable Capacity Management: Not Supported 00:09:19.631 Delete Endurance Group: Not Supported 00:09:19.631 Delete NVM Set: Not Supported 00:09:19.631 Extended LBA Formats Supported: Supported 00:09:19.631 Flexible Data Placement Supported: Not Supported 00:09:19.631 00:09:19.631 Controller Memory Buffer Support 00:09:19.631 ================================ 00:09:19.631 Supported: No 00:09:19.631 00:09:19.631 Persistent Memory Region Support 00:09:19.631 ================================ 00:09:19.631 Supported: No 00:09:19.631 00:09:19.631 Admin Command Set Attributes 00:09:19.631 ============================ 00:09:19.631 Security Send/Receive: Not Supported 00:09:19.631 Format NVM: Supported 00:09:19.631 Firmware Activate/Download: Not Supported 00:09:19.631 Namespace Management: Supported 00:09:19.631 Device Self-Test: Not Supported 00:09:19.631 Directives: Supported 00:09:19.631 NVMe-MI: Not Supported 00:09:19.631 Virtualization Management: Not Supported 00:09:19.631 Doorbell Buffer Config: Supported 00:09:19.631 Get LBA Status Capability: Not Supported 00:09:19.631 Command & Feature Lockdown Capability: Not Supported 00:09:19.631 Abort Command Limit: 4 00:09:19.631 Async Event Request Limit: 4 00:09:19.631 Number of Firmware Slots: N/A 00:09:19.631 Firmware Slot 1 Read-Only: N/A 00:09:19.631 Firmware Activation Without Reset: N/A 00:09:19.631 Multiple Update Detection Support: N/A 00:09:19.631 Firmware Update Granularity: No Information Provided 00:09:19.631 Per-Namespace SMART Log: Yes 00:09:19.631 Asymmetric Namespace Access Log Page: Not Supported 00:09:19.631 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:19.631 Command Effects Log Page: Supported 00:09:19.631 Get Log Page Extended Data: Supported 00:09:19.631 Telemetry Log Pages: Not Supported 00:09:19.631 Persistent Event Log Pages: Not Supported 00:09:19.631 Supported Log Pages Log Page: May Support 00:09:19.631 Commands Supported & Effects Log Page: Not Supported 00:09:19.631 Feature Identifiers & Effects Log Page:May Support 00:09:19.631 NVMe-MI Commands & Effects Log Page: May Support 00:09:19.631 Data Area 4 for Telemetry Log: Not Supported 00:09:19.631 Error Log Page Entries Supported: 1 00:09:19.631 Keep Alive: Not Supported 00:09:19.631 00:09:19.631 NVM Command Set Attributes 00:09:19.631 ========================== 00:09:19.631 Submission Queue Entry Size 00:09:19.631 Max: 64 00:09:19.631 Min: 64 00:09:19.631 Completion Queue Entry Size 00:09:19.631 Max: 16 
00:09:19.631 Min: 16 00:09:19.631 Number of Namespaces: 256 00:09:19.631 Compare Command: Supported 00:09:19.631 Write Uncorrectable Command: Not Supported 00:09:19.631 Dataset Management Command: Supported 00:09:19.631 Write Zeroes Command: Supported 00:09:19.631 Set Features Save Field: Supported 00:09:19.631 Reservations: Not Supported 00:09:19.631 Timestamp: Supported 00:09:19.631 Copy: Supported 00:09:19.631 Volatile Write Cache: Present 00:09:19.631 Atomic Write Unit (Normal): 1 00:09:19.631 Atomic Write Unit (PFail): 1 00:09:19.631 Atomic Compare & Write Unit: 1 00:09:19.631 Fused Compare & Write: Not Supported 00:09:19.631 Scatter-Gather List 00:09:19.631 SGL Command Set: Supported 00:09:19.631 SGL Keyed: Not Supported 00:09:19.631 SGL Bit Bucket Descriptor: Not Supported 00:09:19.631 SGL Metadata Pointer: Not Supported 00:09:19.631 Oversized SGL: Not Supported 00:09:19.631 SGL Metadata Address: Not Supported 00:09:19.631 SGL Offset: Not Supported 00:09:19.631 Transport SGL Data Block: Not Supported 00:09:19.631 Replay Protected Memory Block: Not Supported 00:09:19.631 00:09:19.631 Firmware Slot Information 00:09:19.631 ========================= 00:09:19.631 Active slot: 1 00:09:19.631 Slot 1 Firmware Revision: 1.0 00:09:19.631 00:09:19.631 00:09:19.631 Commands Supported and Effects 00:09:19.631 ============================== 00:09:19.631 Admin Commands 00:09:19.631 -------------- 00:09:19.631 Delete I/O Submission Queue (00h): Supported 00:09:19.631 Create I/O Submission Queue (01h): Supported 00:09:19.631 Get Log Page (02h): Supported 00:09:19.631 Delete I/O Completion Queue (04h): Supported 00:09:19.631 Create I/O Completion Queue (05h): Supported 00:09:19.631 Identify (06h): Supported 00:09:19.631 Abort (08h): Supported 00:09:19.631 Set Features (09h): Supported 00:09:19.631 Get Features (0Ah): Supported 00:09:19.631 Asynchronous Event Request (0Ch): Supported 00:09:19.631 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:19.631 Directive Send (19h): Supported 00:09:19.631 Directive Receive (1Ah): Supported 00:09:19.631 Virtualization Management (1Ch): Supported 00:09:19.631 Doorbell Buffer Config (7Ch): Supported 00:09:19.631 Format NVM (80h): Supported LBA-Change 00:09:19.631 I/O Commands 00:09:19.631 ------------ 00:09:19.632 Flush (00h): Supported LBA-Change 00:09:19.632 Write (01h): Supported LBA-Change 00:09:19.632 Read (02h): Supported 00:09:19.632 Compare (05h): Supported 00:09:19.632 Write Zeroes (08h): Supported LBA-Change 00:09:19.632 Dataset Management (09h): Supported LBA-Change 00:09:19.632 Unknown (0Ch): Supported 00:09:19.632 Unknown (12h): Supported 00:09:19.632 Copy (19h): Supported LBA-Change 00:09:19.632 Unknown (1Dh): Supported LBA-Change 00:09:19.632 00:09:19.632 Error Log 00:09:19.632 ========= 00:09:19.632 00:09:19.632 Arbitration 00:09:19.632 =========== 00:09:19.632 Arbitration Burst: no limit 00:09:19.632 00:09:19.632 Power Management 00:09:19.632 ================ 00:09:19.632 Number of Power States: 1 00:09:19.632 Current Power State: Power State #0 00:09:19.632 Power State #0: 00:09:19.632 Max Power: 25.00 W 00:09:19.632 Non-Operational State: Operational 00:09:19.632 Entry Latency: 16 microseconds 00:09:19.632 Exit Latency: 4 microseconds 00:09:19.632 Relative Read Throughput: 0 00:09:19.632 Relative Read Latency: 0 00:09:19.632 Relative Write Throughput: 0 00:09:19.632 Relative Write Latency: 0 00:09:19.632 Idle Power: Not Reported 00:09:19.632 Active Power: Not Reported 00:09:19.632 Non-Operational Permissive Mode: Not Supported 
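Every controller in this run reports "Volatile Write Cache: Present", so completed writes are only durable after an explicit Flush (00h), which the I/O command tables above list as supported. For a kernel-attached device the equivalent nvme-cli call would look like this (device name illustrative; not applicable while SPDK owns the device through VFIO/UIO):

    # Flush namespace 1 so cached writes reach stable media.
    nvme flush /dev/nvme0 --namespace-id=1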
00:09:19.632 00:09:19.632 Health Information 00:09:19.632 ================== 00:09:19.632 Critical Warnings: 00:09:19.632 Available Spare Space: OK 00:09:19.632 Temperature: OK 00:09:19.632 Device Reliability: OK 00:09:19.632 Read Only: No 00:09:19.632 Volatile Memory Backup: OK 00:09:19.632 Current Temperature: 323 Kelvin (50 Celsius) 00:09:19.632 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:19.632 Available Spare: 0% 00:09:19.632 Available Spare Threshold: 0% 00:09:19.632 Life Percentage Used: 0% 00:09:19.632 Data Units Read: 2137 00:09:19.632 Data Units Written: 1925 00:09:19.632 Host Read Commands: 101718 00:09:19.632 Host Write Commands: 99987 00:09:19.632 Controller Busy Time: 0 minutes 00:09:19.632 Power Cycles: 0 00:09:19.632 Power On Hours: 0 hours 00:09:19.632 Unsafe Shutdowns: 0 00:09:19.632 Unrecoverable Media Errors: 0 00:09:19.632 Lifetime Error Log Entries: 0 00:09:19.632 Warning Temperature Time: 0 minutes 00:09:19.632 Critical Temperature Time: 0 minutes 00:09:19.632 00:09:19.632 Number of Queues 00:09:19.632 ================ 00:09:19.632 Number of I/O Submission Queues: 64 00:09:19.632 Number of I/O Completion Queues: 64 00:09:19.632 00:09:19.632 ZNS Specific Controller Data 00:09:19.632 ============================ 00:09:19.632 Zone Append Size Limit: 0 00:09:19.632 00:09:19.632 00:09:19.632 Active Namespaces 00:09:19.632 ================= 00:09:19.632 Namespace ID:1 00:09:19.632 Error Recovery Timeout: Unlimited 00:09:19.632 Command Set Identifier: NVM (00h) 00:09:19.632 Deallocate: Supported 00:09:19.632 Deallocated/Unwritten Error: Supported 00:09:19.632 Deallocated Read Value: All 0x00 00:09:19.632 Deallocate in Write Zeroes: Not Supported 00:09:19.632 Deallocated Guard Field: 0xFFFF 00:09:19.632 Flush: Supported 00:09:19.632 Reservation: Not Supported 00:09:19.632 Namespace Sharing Capabilities: Private 00:09:19.632 Size (in LBAs): 1048576 (4GiB) 00:09:19.632 Capacity (in LBAs): 1048576 (4GiB) 00:09:19.632 Utilization (in LBAs): 1048576 (4GiB) 00:09:19.632 Thin Provisioning: Not Supported 00:09:19.632 Per-NS Atomic Units: No 00:09:19.632 Maximum Single Source Range Length: 128 00:09:19.632 Maximum Copy Length: 128 00:09:19.632 Maximum Source Range Count: 128 00:09:19.632 NGUID/EUI64 Never Reused: No 00:09:19.632 Namespace Write Protected: No 00:09:19.632 Number of LBA Formats: 8 00:09:19.632 Current LBA Format: LBA Format #04 00:09:19.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.632 00:09:19.632 NVM Specific Namespace Data 00:09:19.632 =========================== 00:09:19.632 Logical Block Storage Tag Mask: 0 00:09:19.632 Protection Information Capabilities: 00:09:19.632 16b Guard Protection Information Storage Tag Support: No 00:09:19.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.632 Storage Tag Check Read Support: No 00:09:19.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Namespace ID:2 00:09:19.632 Error Recovery Timeout: Unlimited 00:09:19.632 Command Set Identifier: NVM (00h) 00:09:19.632 Deallocate: Supported 00:09:19.632 Deallocated/Unwritten Error: Supported 00:09:19.632 Deallocated Read Value: All 0x00 00:09:19.632 Deallocate in Write Zeroes: Not Supported 00:09:19.632 Deallocated Guard Field: 0xFFFF 00:09:19.632 Flush: Supported 00:09:19.632 Reservation: Not Supported 00:09:19.632 Namespace Sharing Capabilities: Private 00:09:19.632 Size (in LBAs): 1048576 (4GiB) 00:09:19.632 Capacity (in LBAs): 1048576 (4GiB) 00:09:19.632 Utilization (in LBAs): 1048576 (4GiB) 00:09:19.632 Thin Provisioning: Not Supported 00:09:19.632 Per-NS Atomic Units: No 00:09:19.632 Maximum Single Source Range Length: 128 00:09:19.632 Maximum Copy Length: 128 00:09:19.632 Maximum Source Range Count: 128 00:09:19.632 NGUID/EUI64 Never Reused: No 00:09:19.632 Namespace Write Protected: No 00:09:19.632 Number of LBA Formats: 8 00:09:19.632 Current LBA Format: LBA Format #04 00:09:19.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.632 00:09:19.632 NVM Specific Namespace Data 00:09:19.632 =========================== 00:09:19.632 Logical Block Storage Tag Mask: 0 00:09:19.632 Protection Information Capabilities: 00:09:19.632 16b Guard Protection Information Storage Tag Support: No 00:09:19.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.632 Storage Tag Check Read Support: No 00:09:19.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Namespace ID:3 00:09:19.632 Error Recovery Timeout: Unlimited 00:09:19.632 Command Set Identifier: NVM (00h) 00:09:19.632 Deallocate: Supported 00:09:19.632 Deallocated/Unwritten Error: Supported 00:09:19.632 Deallocated Read 
Value: All 0x00 00:09:19.632 Deallocate in Write Zeroes: Not Supported 00:09:19.632 Deallocated Guard Field: 0xFFFF 00:09:19.632 Flush: Supported 00:09:19.632 Reservation: Not Supported 00:09:19.632 Namespace Sharing Capabilities: Private 00:09:19.632 Size (in LBAs): 1048576 (4GiB) 00:09:19.632 Capacity (in LBAs): 1048576 (4GiB) 00:09:19.632 Utilization (in LBAs): 1048576 (4GiB) 00:09:19.632 Thin Provisioning: Not Supported 00:09:19.632 Per-NS Atomic Units: No 00:09:19.632 Maximum Single Source Range Length: 128 00:09:19.632 Maximum Copy Length: 128 00:09:19.632 Maximum Source Range Count: 128 00:09:19.632 NGUID/EUI64 Never Reused: No 00:09:19.632 Namespace Write Protected: No 00:09:19.632 Number of LBA Formats: 8 00:09:19.632 Current LBA Format: LBA Format #04 00:09:19.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:19.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:19.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:19.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:19.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:19.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:19.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:19.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:19.632 00:09:19.632 NVM Specific Namespace Data 00:09:19.632 =========================== 00:09:19.632 Logical Block Storage Tag Mask: 0 00:09:19.632 Protection Information Capabilities: 00:09:19.632 16b Guard Protection Information Storage Tag Support: No 00:09:19.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:19.632 Storage Tag Check Read Support: No 00:09:19.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:19.632 11:21:02 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:19.632 11:21:02 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:20.198 ===================================================== 00:09:20.198 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.198 ===================================================== 00:09:20.198 Controller Capabilities/Features 00:09:20.198 ================================ 00:09:20.198 Vendor ID: 1b36 00:09:20.198 Subsystem Vendor ID: 1af4 00:09:20.198 Serial Number: 12343 00:09:20.198 Model Number: QEMU NVMe Ctrl 00:09:20.198 Firmware Version: 8.0.0 00:09:20.198 Recommended Arb Burst: 6 00:09:20.198 IEEE OUI Identifier: 00 54 52 00:09:20.198 Multi-path I/O 00:09:20.198 May have multiple subsystem ports: No 00:09:20.198 May have multiple controllers: Yes 00:09:20.198 Associated with SR-IOV VF: No 00:09:20.198 Max Data Transfer Size: 524288 00:09:20.198 Max Number of Namespaces: 
256 00:09:20.198 Max Number of I/O Queues: 64 00:09:20.198 NVMe Specification Version (VS): 1.4 00:09:20.198 NVMe Specification Version (Identify): 1.4 00:09:20.198 Maximum Queue Entries: 2048 00:09:20.198 Contiguous Queues Required: Yes 00:09:20.198 Arbitration Mechanisms Supported 00:09:20.198 Weighted Round Robin: Not Supported 00:09:20.198 Vendor Specific: Not Supported 00:09:20.198 Reset Timeout: 7500 ms 00:09:20.198 Doorbell Stride: 4 bytes 00:09:20.198 NVM Subsystem Reset: Not Supported 00:09:20.198 Command Sets Supported 00:09:20.198 NVM Command Set: Supported 00:09:20.198 Boot Partition: Not Supported 00:09:20.198 Memory Page Size Minimum: 4096 bytes 00:09:20.199 Memory Page Size Maximum: 65536 bytes 00:09:20.199 Persistent Memory Region: Not Supported 00:09:20.199 Optional Asynchronous Events Supported 00:09:20.199 Namespace Attribute Notices: Supported 00:09:20.199 Firmware Activation Notices: Not Supported 00:09:20.199 ANA Change Notices: Not Supported 00:09:20.199 PLE Aggregate Log Change Notices: Not Supported 00:09:20.199 LBA Status Info Alert Notices: Not Supported 00:09:20.199 EGE Aggregate Log Change Notices: Not Supported 00:09:20.199 Normal NVM Subsystem Shutdown event: Not Supported 00:09:20.199 Zone Descriptor Change Notices: Not Supported 00:09:20.199 Discovery Log Change Notices: Not Supported 00:09:20.199 Controller Attributes 00:09:20.199 128-bit Host Identifier: Not Supported 00:09:20.199 Non-Operational Permissive Mode: Not Supported 00:09:20.199 NVM Sets: Not Supported 00:09:20.199 Read Recovery Levels: Not Supported 00:09:20.199 Endurance Groups: Supported 00:09:20.199 Predictable Latency Mode: Not Supported 00:09:20.199 Traffic Based Keep Alive: Not Supported 00:09:20.199 Namespace Granularity: Not Supported 00:09:20.199 SQ Associations: Not Supported 00:09:20.199 UUID List: Not Supported 00:09:20.199 Multi-Domain Subsystem: Not Supported 00:09:20.199 Fixed Capacity Management: Not Supported 00:09:20.199 Variable Capacity Management: Not Supported 00:09:20.199 Delete Endurance Group: Not Supported 00:09:20.199 Delete NVM Set: Not Supported 00:09:20.199 Extended LBA Formats Supported: Supported 00:09:20.199 Flexible Data Placement Supported: Supported 00:09:20.199 00:09:20.199 Controller Memory Buffer Support 00:09:20.199 ================================ 00:09:20.199 Supported: No 00:09:20.199 00:09:20.199 Persistent Memory Region Support 00:09:20.199 ================================ 00:09:20.199 Supported: No 00:09:20.199 00:09:20.199 Admin Command Set Attributes 00:09:20.199 ============================ 00:09:20.199 Security Send/Receive: Not Supported 00:09:20.199 Format NVM: Supported 00:09:20.199 Firmware Activate/Download: Not Supported 00:09:20.199 Namespace Management: Supported 00:09:20.199 Device Self-Test: Not Supported 00:09:20.199 Directives: Supported 00:09:20.199 NVMe-MI: Not Supported 00:09:20.199 Virtualization Management: Not Supported 00:09:20.199 Doorbell Buffer Config: Supported 00:09:20.199 Get LBA Status Capability: Not Supported 00:09:20.199 Command & Feature Lockdown Capability: Not Supported 00:09:20.199 Abort Command Limit: 4 00:09:20.199 Async Event Request Limit: 4 00:09:20.199 Number of Firmware Slots: N/A 00:09:20.199 Firmware Slot 1 Read-Only: N/A 00:09:20.199 Firmware Activation Without Reset: N/A 00:09:20.199 Multiple Update Detection Support: N/A 00:09:20.199 Firmware Update Granularity: No Information Provided 00:09:20.199 Per-Namespace SMART Log: Yes 00:09:20.199 Asymmetric Namespace Access Log Page: Not Supported 
00:09:20.199 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:20.199 Command Effects Log Page: Supported 00:09:20.199 Get Log Page Extended Data: Supported 00:09:20.199 Telemetry Log Pages: Not Supported 00:09:20.199 Persistent Event Log Pages: Not Supported 00:09:20.199 Supported Log Pages Log Page: May Support 00:09:20.199 Commands Supported & Effects Log Page: Not Supported 00:09:20.199 Feature Identifiers & Effects Log Page: May Support 00:09:20.199 NVMe-MI Commands & Effects Log Page: May Support 00:09:20.199 Data Area 4 for Telemetry Log: Not Supported 00:09:20.199 Error Log Page Entries Supported: 1 00:09:20.199 Keep Alive: Not Supported 00:09:20.199 00:09:20.199 NVM Command Set Attributes 00:09:20.199 ========================== 00:09:20.199 Submission Queue Entry Size 00:09:20.199 Max: 64 00:09:20.199 Min: 64 00:09:20.199 Completion Queue Entry Size 00:09:20.199 Max: 16 00:09:20.199 Min: 16 00:09:20.199 Number of Namespaces: 256 00:09:20.199 Compare Command: Supported 00:09:20.199 Write Uncorrectable Command: Not Supported 00:09:20.199 Dataset Management Command: Supported 00:09:20.199 Write Zeroes Command: Supported 00:09:20.199 Set Features Save Field: Supported 00:09:20.199 Reservations: Not Supported 00:09:20.199 Timestamp: Supported 00:09:20.199 Copy: Supported 00:09:20.199 Volatile Write Cache: Present 00:09:20.199 Atomic Write Unit (Normal): 1 00:09:20.199 Atomic Write Unit (PFail): 1 00:09:20.199 Atomic Compare & Write Unit: 1 00:09:20.199 Fused Compare & Write: Not Supported 00:09:20.199 Scatter-Gather List 00:09:20.199 SGL Command Set: Supported 00:09:20.199 SGL Keyed: Not Supported 00:09:20.199 SGL Bit Bucket Descriptor: Not Supported 00:09:20.199 SGL Metadata Pointer: Not Supported 00:09:20.199 Oversized SGL: Not Supported 00:09:20.199 SGL Metadata Address: Not Supported 00:09:20.199 SGL Offset: Not Supported 00:09:20.199 Transport SGL Data Block: Not Supported 00:09:20.199 Replay Protected Memory Block: Not Supported 00:09:20.199 00:09:20.199 Firmware Slot Information 00:09:20.199 ========================= 00:09:20.199 Active slot: 1 00:09:20.199 Slot 1 Firmware Revision: 1.0 00:09:20.199 00:09:20.199 00:09:20.199 Commands Supported and Effects 00:09:20.199 ============================== 00:09:20.199 Admin Commands 00:09:20.199 -------------- 00:09:20.199 Delete I/O Submission Queue (00h): Supported 00:09:20.199 Create I/O Submission Queue (01h): Supported 00:09:20.199 Get Log Page (02h): Supported 00:09:20.199 Delete I/O Completion Queue (04h): Supported 00:09:20.199 Create I/O Completion Queue (05h): Supported 00:09:20.199 Identify (06h): Supported 00:09:20.199 Abort (08h): Supported 00:09:20.199 Set Features (09h): Supported 00:09:20.199 Get Features (0Ah): Supported 00:09:20.199 Asynchronous Event Request (0Ch): Supported 00:09:20.199 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:20.199 Directive Send (19h): Supported 00:09:20.199 Directive Receive (1Ah): Supported 00:09:20.199 Virtualization Management (1Ch): Supported 00:09:20.199 Doorbell Buffer Config (7Ch): Supported 00:09:20.199 Format NVM (80h): Supported LBA-Change 00:09:20.199 I/O Commands 00:09:20.199 ------------ 00:09:20.199 Flush (00h): Supported LBA-Change 00:09:20.199 Write (01h): Supported LBA-Change 00:09:20.199 Read (02h): Supported 00:09:20.199 Compare (05h): Supported 00:09:20.199 Write Zeroes (08h): Supported LBA-Change 00:09:20.199 Dataset Management (09h): Supported LBA-Change 00:09:20.199 Unknown (0Ch): Supported 00:09:20.199 Unknown (12h): Supported 00:09:20.199 Copy 
(19h): Supported LBA-Change 00:09:20.199 Unknown (1Dh): Supported LBA-Change 00:09:20.199 00:09:20.199 Error Log 00:09:20.199 ========= 00:09:20.199 00:09:20.199 Arbitration 00:09:20.199 =========== 00:09:20.199 Arbitration Burst: no limit 00:09:20.199 00:09:20.199 Power Management 00:09:20.199 ================ 00:09:20.199 Number of Power States: 1 00:09:20.199 Current Power State: Power State #0 00:09:20.199 Power State #0: 00:09:20.199 Max Power: 25.00 W 00:09:20.199 Non-Operational State: Operational 00:09:20.199 Entry Latency: 16 microseconds 00:09:20.199 Exit Latency: 4 microseconds 00:09:20.199 Relative Read Throughput: 0 00:09:20.199 Relative Read Latency: 0 00:09:20.199 Relative Write Throughput: 0 00:09:20.199 Relative Write Latency: 0 00:09:20.199 Idle Power: Not Reported 00:09:20.199 Active Power: Not Reported 00:09:20.199 Non-Operational Permissive Mode: Not Supported 00:09:20.199 00:09:20.199 Health Information 00:09:20.199 ================== 00:09:20.199 Critical Warnings: 00:09:20.199 Available Spare Space: OK 00:09:20.199 Temperature: OK 00:09:20.199 Device Reliability: OK 00:09:20.199 Read Only: No 00:09:20.199 Volatile Memory Backup: OK 00:09:20.199 Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.199 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:20.199 Available Spare: 0% 00:09:20.199 Available Spare Threshold: 0% 00:09:20.199 Life Percentage Used: 0% 00:09:20.199 Data Units Read: 785 00:09:20.199 Data Units Written: 714 00:09:20.199 Host Read Commands: 34527 00:09:20.199 Host Write Commands: 33950 00:09:20.199 Controller Busy Time: 0 minutes 00:09:20.199 Power Cycles: 0 00:09:20.199 Power On Hours: 0 hours 00:09:20.199 Unsafe Shutdowns: 0 00:09:20.199 Unrecoverable Media Errors: 0 00:09:20.199 Lifetime Error Log Entries: 0 00:09:20.199 Warning Temperature Time: 0 minutes 00:09:20.199 Critical Temperature Time: 0 minutes 00:09:20.199 00:09:20.199 Number of Queues 00:09:20.199 ================ 00:09:20.199 Number of I/O Submission Queues: 64 00:09:20.199 Number of I/O Completion Queues: 64 00:09:20.199 00:09:20.199 ZNS Specific Controller Data 00:09:20.199 ============================ 00:09:20.199 Zone Append Size Limit: 0 00:09:20.199 00:09:20.199 00:09:20.199 Active Namespaces 00:09:20.199 ================= 00:09:20.199 Namespace ID:1 00:09:20.199 Error Recovery Timeout: Unlimited 00:09:20.199 Command Set Identifier: NVM (00h) 00:09:20.200 Deallocate: Supported 00:09:20.200 Deallocated/Unwritten Error: Supported 00:09:20.200 Deallocated Read Value: All 0x00 00:09:20.200 Deallocate in Write Zeroes: Not Supported 00:09:20.200 Deallocated Guard Field: 0xFFFF 00:09:20.200 Flush: Supported 00:09:20.200 Reservation: Not Supported 00:09:20.200 Namespace Sharing Capabilities: Multiple Controllers 00:09:20.200 Size (in LBAs): 262144 (1GiB) 00:09:20.200 Capacity (in LBAs): 262144 (1GiB) 00:09:20.200 Utilization (in LBAs): 262144 (1GiB) 00:09:20.200 Thin Provisioning: Not Supported 00:09:20.200 Per-NS Atomic Units: No 00:09:20.200 Maximum Single Source Range Length: 128 00:09:20.200 Maximum Copy Length: 128 00:09:20.200 Maximum Source Range Count: 128 00:09:20.200 NGUID/EUI64 Never Reused: No 00:09:20.200 Namespace Write Protected: No 00:09:20.200 Endurance group ID: 1 00:09:20.200 Number of LBA Formats: 8 00:09:20.200 Current LBA Format: LBA Format #04 00:09:20.200 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:20.200 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:20.200 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:20.200 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:20.200 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:20.200 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:20.200 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:20.200 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:20.200 00:09:20.200 Get Feature FDP: 00:09:20.200 ================ 00:09:20.200 Enabled: Yes 00:09:20.200 FDP configuration index: 0 00:09:20.200 00:09:20.200 FDP configurations log page 00:09:20.200 =========================== 00:09:20.200 Number of FDP configurations: 1 00:09:20.200 Version: 0 00:09:20.200 Size: 112 00:09:20.200 FDP Configuration Descriptor: 0 00:09:20.200 Descriptor Size: 96 00:09:20.200 Reclaim Group Identifier format: 2 00:09:20.200 FDP Volatile Write Cache: Not Present 00:09:20.200 FDP Configuration: Valid 00:09:20.200 Vendor Specific Size: 0 00:09:20.200 Number of Reclaim Groups: 2 00:09:20.200 Number of Reclaim Unit Handles: 8 00:09:20.200 Max Placement Identifiers: 128 00:09:20.200 Number of Namespaces Supported: 256 00:09:20.200 Reclaim Unit Nominal Size: 6000000 bytes 00:09:20.200 Estimated Reclaim Unit Time Limit: Not Reported 00:09:20.200 RUH Desc #000: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #001: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #002: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #003: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #004: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #005: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #006: RUH Type: Initially Isolated 00:09:20.200 RUH Desc #007: RUH Type: Initially Isolated 00:09:20.200 00:09:20.200 FDP reclaim unit handle usage log page 00:09:20.200 ====================================== 00:09:20.200 Number of Reclaim Unit Handles: 8 00:09:20.200 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:20.200 RUH Usage Desc #001: RUH Attributes: Unused 00:09:20.200 RUH Usage Desc #002: RUH Attributes: Unused 00:09:20.200 RUH Usage Desc #003: RUH Attributes: Unused 00:09:20.200 RUH Usage Desc #004: RUH Attributes: Unused 00:09:20.200 RUH Usage Desc #005: RUH Attributes: Unused 00:09:20.200 RUH Usage Desc #006: RUH Attributes: Unused 00:09:20.200 RUH Usage Desc #007: RUH Attributes: Unused 00:09:20.200 00:09:20.200 FDP statistics log page 00:09:20.200 ======================= 00:09:20.200 Host bytes with metadata written: 454008832 00:09:20.200 Media bytes with metadata written: 454053888 00:09:20.200 Media bytes erased: 0 00:09:20.200 00:09:20.200 FDP events log page 00:09:20.200 =================== 00:09:20.200 Number of FDP events: 0 00:09:20.200 00:09:20.200 NVM Specific Namespace Data 00:09:20.200 =========================== 00:09:20.200 Logical Block Storage Tag Mask: 0 00:09:20.200 Protection Information Capabilities: 00:09:20.200 16b Guard Protection Information Storage Tag Support: No 00:09:20.200 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:20.200 Storage Tag Check Read Support: No 00:09:20.200 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:20.200 00:09:20.200 real 0m1.853s 00:09:20.200 user 0m0.763s 00:09:20.200 sys 0m0.885s 00:09:20.200 11:21:02 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:20.200 ************************************ 00:09:20.200 11:21:02 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:20.200 END TEST nvme_identify 00:09:20.200 ************************************ 00:09:20.200 11:21:02 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:20.200 11:21:02 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:20.200 11:21:02 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:20.200 11:21:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.200 ************************************ 00:09:20.200 START TEST nvme_perf 00:09:20.200 ************************************ 00:09:20.200 11:21:02 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:09:20.200 11:21:02 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:21.576 Initializing NVMe Controllers 00:09:21.576 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.576 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.576 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.576 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.576 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:21.576 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:21.576 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:21.576 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:21.576 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:21.576 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:21.576 Initialization complete. Launching workers. 
00:09:21.576 ======================================================== 00:09:21.576 Latency(us) 00:09:21.576 Device Information : IOPS MiB/s Average min max 00:09:21.576 PCIE (0000:00:10.0) NSID 1 from core 0: 12335.66 144.56 10386.72 8362.68 48603.69 00:09:21.576 PCIE (0000:00:11.0) NSID 1 from core 0: 12335.66 144.56 10359.08 8453.92 45779.68 00:09:21.576 PCIE (0000:00:13.0) NSID 1 from core 0: 12335.66 144.56 10329.37 8391.60 43679.24 00:09:21.576 PCIE (0000:00:12.0) NSID 1 from core 0: 12335.66 144.56 10298.95 8416.73 40887.45 00:09:21.576 PCIE (0000:00:12.0) NSID 2 from core 0: 12335.66 144.56 10268.42 8385.46 38163.07 00:09:21.576 PCIE (0000:00:12.0) NSID 3 from core 0: 12335.66 144.56 10236.80 8400.03 35251.98 00:09:21.576 ======================================================== 00:09:21.576 Total : 74013.93 867.35 10313.22 8362.68 48603.69 00:09:21.576 00:09:21.576 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:21.576 ================================================================================= 00:09:21.576 1.00000% : 8638.836us 00:09:21.576 10.00000% : 8996.305us 00:09:21.576 25.00000% : 9294.196us 00:09:21.576 50.00000% : 9770.822us 00:09:21.576 75.00000% : 10604.916us 00:09:21.576 90.00000% : 11558.167us 00:09:21.576 95.00000% : 12690.153us 00:09:21.576 98.00000% : 14120.029us 00:09:21.576 99.00000% : 37176.785us 00:09:21.576 99.50000% : 45994.356us 00:09:21.576 99.90000% : 48139.171us 00:09:21.576 99.99000% : 48615.796us 00:09:21.576 99.99900% : 48615.796us 00:09:21.576 99.99990% : 48615.796us 00:09:21.576 99.99999% : 48615.796us 00:09:21.576 00:09:21.576 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:21.576 ================================================================================= 00:09:21.576 1.00000% : 8698.415us 00:09:21.576 10.00000% : 9055.884us 00:09:21.576 25.00000% : 9294.196us 00:09:21.576 50.00000% : 9770.822us 00:09:21.576 75.00000% : 10664.495us 00:09:21.576 90.00000% : 11439.011us 00:09:21.576 95.00000% : 12511.418us 00:09:21.576 98.00000% : 14120.029us 00:09:21.576 99.00000% : 35031.971us 00:09:21.576 99.50000% : 43372.916us 00:09:21.576 99.90000% : 45517.731us 00:09:21.576 99.99000% : 45756.044us 00:09:21.576 99.99900% : 45994.356us 00:09:21.576 99.99990% : 45994.356us 00:09:21.576 99.99999% : 45994.356us 00:09:21.576 00:09:21.576 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:21.576 ================================================================================= 00:09:21.576 1.00000% : 8698.415us 00:09:21.576 10.00000% : 9055.884us 00:09:21.576 25.00000% : 9294.196us 00:09:21.576 50.00000% : 9770.822us 00:09:21.576 75.00000% : 10664.495us 00:09:21.577 90.00000% : 11439.011us 00:09:21.577 95.00000% : 12511.418us 00:09:21.577 98.00000% : 14179.607us 00:09:21.577 99.00000% : 32648.844us 00:09:21.577 99.50000% : 41228.102us 00:09:21.577 99.90000% : 43372.916us 00:09:21.577 99.99000% : 43849.542us 00:09:21.577 99.99900% : 43849.542us 00:09:21.577 99.99990% : 43849.542us 00:09:21.577 99.99999% : 43849.542us 00:09:21.577 00:09:21.577 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:21.577 ================================================================================= 00:09:21.577 1.00000% : 8698.415us 00:09:21.577 10.00000% : 9055.884us 00:09:21.577 25.00000% : 9294.196us 00:09:21.577 50.00000% : 9711.244us 00:09:21.577 75.00000% : 10664.495us 00:09:21.577 90.00000% : 11439.011us 00:09:21.577 95.00000% : 12511.418us 00:09:21.577 98.00000% : 14000.873us 
00:09:21.577 99.00000% : 30265.716us 00:09:21.577 99.50000% : 38368.349us 00:09:21.577 99.90000% : 40513.164us 00:09:21.577 99.99000% : 40989.789us 00:09:21.577 99.99900% : 40989.789us 00:09:21.577 99.99990% : 40989.789us 00:09:21.577 99.99999% : 40989.789us 00:09:21.577 00:09:21.577 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:21.577 ================================================================================= 00:09:21.577 1.00000% : 8698.415us 00:09:21.577 10.00000% : 9055.884us 00:09:21.577 25.00000% : 9294.196us 00:09:21.577 50.00000% : 9711.244us 00:09:21.577 75.00000% : 10604.916us 00:09:21.577 90.00000% : 11439.011us 00:09:21.577 95.00000% : 12630.575us 00:09:21.577 98.00000% : 14000.873us 00:09:21.577 99.00000% : 27405.964us 00:09:21.577 99.50000% : 35746.909us 00:09:21.577 99.90000% : 37891.724us 00:09:21.577 99.99000% : 38130.036us 00:09:21.577 99.99900% : 38368.349us 00:09:21.577 99.99990% : 38368.349us 00:09:21.577 99.99999% : 38368.349us 00:09:21.577 00:09:21.577 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:21.577 ================================================================================= 00:09:21.577 1.00000% : 8757.993us 00:09:21.577 10.00000% : 9055.884us 00:09:21.577 25.00000% : 9294.196us 00:09:21.577 50.00000% : 9711.244us 00:09:21.577 75.00000% : 10604.916us 00:09:21.577 90.00000% : 11439.011us 00:09:21.577 95.00000% : 12511.418us 00:09:21.577 98.00000% : 14000.873us 00:09:21.577 99.00000% : 24665.367us 00:09:21.577 99.50000% : 32887.156us 00:09:21.577 99.90000% : 34793.658us 00:09:21.577 99.99000% : 35270.284us 00:09:21.577 99.99900% : 35270.284us 00:09:21.577 99.99990% : 35270.284us 00:09:21.577 99.99999% : 35270.284us 00:09:21.577 00:09:21.577 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:21.577 ============================================================================== 00:09:21.577 Range in us Cumulative IO count 00:09:21.577 8340.945 - 8400.524: 0.0567% ( 7) 00:09:21.577 8400.524 - 8460.102: 0.2024% ( 18) 00:09:21.577 8460.102 - 8519.680: 0.4048% ( 25) 00:09:21.577 8519.680 - 8579.258: 0.7205% ( 39) 00:09:21.577 8579.258 - 8638.836: 1.3358% ( 76) 00:09:21.577 8638.836 - 8698.415: 2.2749% ( 116) 00:09:21.577 8698.415 - 8757.993: 3.6512% ( 170) 00:09:21.577 8757.993 - 8817.571: 5.2137% ( 193) 00:09:21.577 8817.571 - 8877.149: 7.2296% ( 249) 00:09:21.577 8877.149 - 8936.727: 9.4964% ( 280) 00:09:21.577 8936.727 - 8996.305: 11.8928% ( 296) 00:09:21.577 8996.305 - 9055.884: 14.3863% ( 308) 00:09:21.577 9055.884 - 9115.462: 17.1713% ( 344) 00:09:21.577 9115.462 - 9175.040: 20.0210% ( 352) 00:09:21.577 9175.040 - 9234.618: 23.0246% ( 371) 00:09:21.577 9234.618 - 9294.196: 26.0363% ( 372) 00:09:21.577 9294.196 - 9353.775: 29.1532% ( 385) 00:09:21.577 9353.775 - 9413.353: 32.2944% ( 388) 00:09:21.577 9413.353 - 9472.931: 35.3384% ( 376) 00:09:21.577 9472.931 - 9532.509: 38.5282% ( 394) 00:09:21.577 9532.509 - 9592.087: 41.7341% ( 396) 00:09:21.577 9592.087 - 9651.665: 44.7377% ( 371) 00:09:21.577 9651.665 - 9711.244: 47.7817% ( 376) 00:09:21.577 9711.244 - 9770.822: 50.5748% ( 345) 00:09:21.577 9770.822 - 9830.400: 53.2222% ( 327) 00:09:21.577 9830.400 - 9889.978: 55.5942% ( 293) 00:09:21.577 9889.978 - 9949.556: 57.5939% ( 247) 00:09:21.577 9949.556 - 10009.135: 59.6584% ( 255) 00:09:21.577 10009.135 - 10068.713: 61.3666% ( 211) 00:09:21.577 10068.713 - 10128.291: 63.0667% ( 210) 00:09:21.577 10128.291 - 10187.869: 64.7830% ( 212) 00:09:21.577 10187.869 - 10247.447: 66.4184% ( 202) 
00:09:21.577 10247.447 - 10307.025: 67.8837% ( 181) 00:09:21.577 10307.025 - 10366.604: 69.3653% ( 183) 00:09:21.577 10366.604 - 10426.182: 70.7821% ( 175) 00:09:21.577 10426.182 - 10485.760: 72.1907% ( 174) 00:09:21.577 10485.760 - 10545.338: 73.7128% ( 188) 00:09:21.577 10545.338 - 10604.916: 75.0081% ( 160) 00:09:21.577 10604.916 - 10664.495: 76.3844% ( 170) 00:09:21.577 10664.495 - 10724.073: 77.6635% ( 158) 00:09:21.577 10724.073 - 10783.651: 78.9994% ( 165) 00:09:21.577 10783.651 - 10843.229: 80.2380% ( 153) 00:09:21.577 10843.229 - 10902.807: 81.4443% ( 149) 00:09:21.577 10902.807 - 10962.385: 82.6506% ( 149) 00:09:21.577 10962.385 - 11021.964: 83.7840% ( 140) 00:09:21.577 11021.964 - 11081.542: 84.9822% ( 148) 00:09:21.577 11081.542 - 11141.120: 85.9861% ( 124) 00:09:21.577 11141.120 - 11200.698: 86.8604% ( 108) 00:09:21.577 11200.698 - 11260.276: 87.6295% ( 95) 00:09:21.577 11260.276 - 11319.855: 88.3501% ( 89) 00:09:21.577 11319.855 - 11379.433: 88.9249% ( 71) 00:09:21.577 11379.433 - 11439.011: 89.4673% ( 67) 00:09:21.577 11439.011 - 11498.589: 89.8316% ( 45) 00:09:21.577 11498.589 - 11558.167: 90.2931% ( 57) 00:09:21.577 11558.167 - 11617.745: 90.6817% ( 48) 00:09:21.577 11617.745 - 11677.324: 91.1593% ( 59) 00:09:21.577 11677.324 - 11736.902: 91.4670% ( 38) 00:09:21.577 11736.902 - 11796.480: 91.8151% ( 43) 00:09:21.577 11796.480 - 11856.058: 92.0823% ( 33) 00:09:21.577 11856.058 - 11915.636: 92.4142% ( 41) 00:09:21.577 11915.636 - 11975.215: 92.6652% ( 31) 00:09:21.577 11975.215 - 12034.793: 92.8999% ( 29) 00:09:21.577 12034.793 - 12094.371: 93.1428% ( 30) 00:09:21.577 12094.371 - 12153.949: 93.3614% ( 27) 00:09:21.577 12153.949 - 12213.527: 93.5638% ( 25) 00:09:21.577 12213.527 - 12273.105: 93.8148% ( 31) 00:09:21.577 12273.105 - 12332.684: 93.9848% ( 21) 00:09:21.577 12332.684 - 12392.262: 94.2277% ( 30) 00:09:21.577 12392.262 - 12451.840: 94.4139% ( 23) 00:09:21.577 12451.840 - 12511.418: 94.6001% ( 23) 00:09:21.577 12511.418 - 12570.996: 94.7863% ( 23) 00:09:21.577 12570.996 - 12630.575: 94.9563% ( 21) 00:09:21.577 12630.575 - 12690.153: 95.1182% ( 20) 00:09:21.577 12690.153 - 12749.731: 95.1911% ( 9) 00:09:21.577 12749.731 - 12809.309: 95.3206% ( 16) 00:09:21.577 12809.309 - 12868.887: 95.4258% ( 13) 00:09:21.577 12868.887 - 12928.465: 95.5635% ( 17) 00:09:21.577 12928.465 - 12988.044: 95.6849% ( 15) 00:09:21.577 12988.044 - 13047.622: 95.8306% ( 18) 00:09:21.577 13047.622 - 13107.200: 95.9602% ( 16) 00:09:21.577 13107.200 - 13166.778: 96.0573% ( 12) 00:09:21.577 13166.778 - 13226.356: 96.1788% ( 15) 00:09:21.577 13226.356 - 13285.935: 96.2840% ( 13) 00:09:21.577 13285.935 - 13345.513: 96.4297% ( 18) 00:09:21.577 13345.513 - 13405.091: 96.5512% ( 15) 00:09:21.577 13405.091 - 13464.669: 96.6726% ( 15) 00:09:21.577 13464.669 - 13524.247: 96.7698% ( 12) 00:09:21.577 13524.247 - 13583.825: 96.8912% ( 15) 00:09:21.577 13583.825 - 13643.404: 97.0288% ( 17) 00:09:21.577 13643.404 - 13702.982: 97.1503% ( 15) 00:09:21.577 13702.982 - 13762.560: 97.2960% ( 18) 00:09:21.577 13762.560 - 13822.138: 97.4012% ( 13) 00:09:21.577 13822.138 - 13881.716: 97.5146% ( 14) 00:09:21.577 13881.716 - 13941.295: 97.6441% ( 16) 00:09:21.577 13941.295 - 14000.873: 97.7574% ( 14) 00:09:21.577 14000.873 - 14060.451: 97.8384% ( 10) 00:09:21.577 14060.451 - 14120.029: 98.0084% ( 21) 00:09:21.577 14120.029 - 14179.607: 98.1218% ( 14) 00:09:21.577 14179.607 - 14239.185: 98.2513% ( 16) 00:09:21.577 14239.185 - 14298.764: 98.3484% ( 12) 00:09:21.577 14298.764 - 14358.342: 98.4699% ( 15) 00:09:21.577 
14358.342 - 14417.920: 98.5427% ( 9) 00:09:21.577 14417.920 - 14477.498: 98.5994% ( 7) 00:09:21.577 14477.498 - 14537.076: 98.6480% ( 6) 00:09:21.577 14537.076 - 14596.655: 98.7047% ( 7) 00:09:21.577 14596.655 - 14656.233: 98.7370% ( 4) 00:09:21.577 14656.233 - 14715.811: 98.7775% ( 5) 00:09:21.577 14715.811 - 14775.389: 98.8180% ( 5) 00:09:21.577 14775.389 - 14834.967: 98.8585% ( 5) 00:09:21.577 14834.967 - 14894.545: 98.8828% ( 3) 00:09:21.577 14894.545 - 14954.124: 98.8909% ( 1) 00:09:21.577 14954.124 - 15013.702: 98.9152% ( 3) 00:09:21.577 15013.702 - 15073.280: 98.9313% ( 2) 00:09:21.577 15073.280 - 15132.858: 98.9637% ( 4) 00:09:21.577 36700.160 - 36938.473: 98.9880% ( 3) 00:09:21.577 36938.473 - 37176.785: 99.0285% ( 5) 00:09:21.577 37176.785 - 37415.098: 99.0852% ( 7) 00:09:21.577 37415.098 - 37653.411: 99.1256% ( 5) 00:09:21.577 37653.411 - 37891.724: 99.1742% ( 6) 00:09:21.577 37891.724 - 38130.036: 99.2228% ( 6) 00:09:21.577 38130.036 - 38368.349: 99.2714% ( 6) 00:09:21.577 38368.349 - 38606.662: 99.3038% ( 4) 00:09:21.577 38606.662 - 38844.975: 99.3442% ( 5) 00:09:21.577 38844.975 - 39083.287: 99.4009% ( 7) 00:09:21.578 39083.287 - 39321.600: 99.4495% ( 6) 00:09:21.578 39321.600 - 39559.913: 99.4819% ( 4) 00:09:21.578 45517.731 - 45756.044: 99.4900% ( 1) 00:09:21.578 45756.044 - 45994.356: 99.5223% ( 4) 00:09:21.578 45994.356 - 46232.669: 99.5628% ( 5) 00:09:21.578 46232.669 - 46470.982: 99.6114% ( 6) 00:09:21.578 46470.982 - 46709.295: 99.6519% ( 5) 00:09:21.578 46709.295 - 46947.607: 99.7005% ( 6) 00:09:21.578 46947.607 - 47185.920: 99.7328% ( 4) 00:09:21.578 47185.920 - 47424.233: 99.7895% ( 7) 00:09:21.578 47424.233 - 47662.545: 99.8300% ( 5) 00:09:21.578 47662.545 - 47900.858: 99.8786% ( 6) 00:09:21.578 47900.858 - 48139.171: 99.9190% ( 5) 00:09:21.578 48139.171 - 48377.484: 99.9595% ( 5) 00:09:21.578 48377.484 - 48615.796: 100.0000% ( 5) 00:09:21.578 00:09:21.578 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:21.578 ============================================================================== 00:09:21.578 Range in us Cumulative IO count 00:09:21.578 8400.524 - 8460.102: 0.0081% ( 1) 00:09:21.578 8460.102 - 8519.680: 0.0972% ( 11) 00:09:21.578 8519.680 - 8579.258: 0.2995% ( 25) 00:09:21.578 8579.258 - 8638.836: 0.6477% ( 43) 00:09:21.578 8638.836 - 8698.415: 1.2144% ( 70) 00:09:21.578 8698.415 - 8757.993: 2.0725% ( 106) 00:09:21.578 8757.993 - 8817.571: 3.1979% ( 139) 00:09:21.578 8817.571 - 8877.149: 4.8575% ( 205) 00:09:21.578 8877.149 - 8936.727: 6.9220% ( 255) 00:09:21.578 8936.727 - 8996.305: 9.3750% ( 303) 00:09:21.578 8996.305 - 9055.884: 12.1762% ( 346) 00:09:21.578 9055.884 - 9115.462: 15.2526% ( 380) 00:09:21.578 9115.462 - 9175.040: 18.3938% ( 388) 00:09:21.578 9175.040 - 9234.618: 21.6240% ( 399) 00:09:21.578 9234.618 - 9294.196: 25.0000% ( 417) 00:09:21.578 9294.196 - 9353.775: 28.5946% ( 444) 00:09:21.578 9353.775 - 9413.353: 32.2215% ( 448) 00:09:21.578 9413.353 - 9472.931: 35.9618% ( 462) 00:09:21.578 9472.931 - 9532.509: 39.5887% ( 448) 00:09:21.578 9532.509 - 9592.087: 43.0861% ( 432) 00:09:21.578 9592.087 - 9651.665: 46.4054% ( 410) 00:09:21.578 9651.665 - 9711.244: 49.4900% ( 381) 00:09:21.578 9711.244 - 9770.822: 52.1697% ( 331) 00:09:21.578 9770.822 - 9830.400: 54.5418% ( 293) 00:09:21.578 9830.400 - 9889.978: 56.4848% ( 240) 00:09:21.578 9889.978 - 9949.556: 58.2092% ( 213) 00:09:21.578 9949.556 - 10009.135: 59.7231% ( 187) 00:09:21.578 10009.135 - 10068.713: 61.1885% ( 181) 00:09:21.578 10068.713 - 10128.291: 62.5729% ( 171) 
00:09:21.578 10128.291 - 10187.869: 64.1192% ( 191) 00:09:21.578 10187.869 - 10247.447: 65.6250% ( 186) 00:09:21.578 10247.447 - 10307.025: 67.1551% ( 189) 00:09:21.578 10307.025 - 10366.604: 68.6286% ( 182) 00:09:21.578 10366.604 - 10426.182: 70.1668% ( 190) 00:09:21.578 10426.182 - 10485.760: 71.7536% ( 196) 00:09:21.578 10485.760 - 10545.338: 73.2432% ( 184) 00:09:21.578 10545.338 - 10604.916: 74.7166% ( 182) 00:09:21.578 10604.916 - 10664.495: 76.2872% ( 194) 00:09:21.578 10664.495 - 10724.073: 77.8093% ( 188) 00:09:21.578 10724.073 - 10783.651: 79.3070% ( 185) 00:09:21.578 10783.651 - 10843.229: 80.9100% ( 198) 00:09:21.578 10843.229 - 10902.807: 82.3025% ( 172) 00:09:21.578 10902.807 - 10962.385: 83.7111% ( 174) 00:09:21.578 10962.385 - 11021.964: 85.0389% ( 164) 00:09:21.578 11021.964 - 11081.542: 86.1561% ( 138) 00:09:21.578 11081.542 - 11141.120: 87.1600% ( 124) 00:09:21.578 11141.120 - 11200.698: 87.9777% ( 101) 00:09:21.578 11200.698 - 11260.276: 88.6091% ( 78) 00:09:21.578 11260.276 - 11319.855: 89.1758% ( 70) 00:09:21.578 11319.855 - 11379.433: 89.7102% ( 66) 00:09:21.578 11379.433 - 11439.011: 90.1635% ( 56) 00:09:21.578 11439.011 - 11498.589: 90.5521% ( 48) 00:09:21.578 11498.589 - 11558.167: 90.9326% ( 47) 00:09:21.578 11558.167 - 11617.745: 91.2565% ( 40) 00:09:21.578 11617.745 - 11677.324: 91.5965% ( 42) 00:09:21.578 11677.324 - 11736.902: 91.9284% ( 41) 00:09:21.578 11736.902 - 11796.480: 92.2604% ( 41) 00:09:21.578 11796.480 - 11856.058: 92.5923% ( 41) 00:09:21.578 11856.058 - 11915.636: 92.8918% ( 37) 00:09:21.578 11915.636 - 11975.215: 93.1671% ( 34) 00:09:21.578 11975.215 - 12034.793: 93.4100% ( 30) 00:09:21.578 12034.793 - 12094.371: 93.6448% ( 29) 00:09:21.578 12094.371 - 12153.949: 93.8714% ( 28) 00:09:21.578 12153.949 - 12213.527: 94.1062% ( 29) 00:09:21.578 12213.527 - 12273.105: 94.3491% ( 30) 00:09:21.578 12273.105 - 12332.684: 94.5596% ( 26) 00:09:21.578 12332.684 - 12392.262: 94.7782% ( 27) 00:09:21.578 12392.262 - 12451.840: 94.9644% ( 23) 00:09:21.578 12451.840 - 12511.418: 95.0939% ( 16) 00:09:21.578 12511.418 - 12570.996: 95.2234% ( 16) 00:09:21.578 12570.996 - 12630.575: 95.3368% ( 14) 00:09:21.578 12630.575 - 12690.153: 95.4177% ( 10) 00:09:21.578 12690.153 - 12749.731: 95.5311% ( 14) 00:09:21.578 12749.731 - 12809.309: 95.6120% ( 10) 00:09:21.578 12809.309 - 12868.887: 95.7173% ( 13) 00:09:21.578 12868.887 - 12928.465: 95.7983% ( 10) 00:09:21.578 12928.465 - 12988.044: 95.8387% ( 5) 00:09:21.578 12988.044 - 13047.622: 95.8873% ( 6) 00:09:21.578 13047.622 - 13107.200: 95.9683% ( 10) 00:09:21.578 13107.200 - 13166.778: 96.0654% ( 12) 00:09:21.578 13166.778 - 13226.356: 96.1869% ( 15) 00:09:21.578 13226.356 - 13285.935: 96.2921% ( 13) 00:09:21.578 13285.935 - 13345.513: 96.4054% ( 14) 00:09:21.578 13345.513 - 13405.091: 96.5350% ( 16) 00:09:21.578 13405.091 - 13464.669: 96.6402% ( 13) 00:09:21.578 13464.669 - 13524.247: 96.7455% ( 13) 00:09:21.578 13524.247 - 13583.825: 96.8912% ( 18) 00:09:21.578 13583.825 - 13643.404: 97.0045% ( 14) 00:09:21.578 13643.404 - 13702.982: 97.1179% ( 14) 00:09:21.578 13702.982 - 13762.560: 97.2312% ( 14) 00:09:21.578 13762.560 - 13822.138: 97.3608% ( 16) 00:09:21.578 13822.138 - 13881.716: 97.4822% ( 15) 00:09:21.578 13881.716 - 13941.295: 97.6198% ( 17) 00:09:21.578 13941.295 - 14000.873: 97.7413% ( 15) 00:09:21.578 14000.873 - 14060.451: 97.8789% ( 17) 00:09:21.578 14060.451 - 14120.029: 98.0084% ( 16) 00:09:21.578 14120.029 - 14179.607: 98.1380% ( 16) 00:09:21.578 14179.607 - 14239.185: 98.2594% ( 15) 00:09:21.578 
14239.185 - 14298.764: 98.3646% ( 13) 00:09:21.578 14298.764 - 14358.342: 98.4456% ( 10) 00:09:21.578 14358.342 - 14417.920: 98.4942% ( 6) 00:09:21.578 14417.920 - 14477.498: 98.5589% ( 8) 00:09:21.578 14477.498 - 14537.076: 98.6075% ( 6) 00:09:21.578 14537.076 - 14596.655: 98.6561% ( 6) 00:09:21.578 14596.655 - 14656.233: 98.7128% ( 7) 00:09:21.578 14656.233 - 14715.811: 98.7613% ( 6) 00:09:21.578 14715.811 - 14775.389: 98.8099% ( 6) 00:09:21.578 14775.389 - 14834.967: 98.8504% ( 5) 00:09:21.578 14834.967 - 14894.545: 98.8990% ( 6) 00:09:21.578 14894.545 - 14954.124: 98.9394% ( 5) 00:09:21.578 14954.124 - 15013.702: 98.9637% ( 3) 00:09:21.578 34555.345 - 34793.658: 98.9799% ( 2) 00:09:21.578 34793.658 - 35031.971: 99.0204% ( 5) 00:09:21.578 35031.971 - 35270.284: 99.0609% ( 5) 00:09:21.578 35270.284 - 35508.596: 99.1095% ( 6) 00:09:21.578 35508.596 - 35746.909: 99.1580% ( 6) 00:09:21.578 35746.909 - 35985.222: 99.2066% ( 6) 00:09:21.578 35985.222 - 36223.535: 99.2471% ( 5) 00:09:21.578 36223.535 - 36461.847: 99.2957% ( 6) 00:09:21.578 36461.847 - 36700.160: 99.3442% ( 6) 00:09:21.578 36700.160 - 36938.473: 99.3928% ( 6) 00:09:21.578 36938.473 - 37176.785: 99.4414% ( 6) 00:09:21.578 37176.785 - 37415.098: 99.4819% ( 5) 00:09:21.578 43134.604 - 43372.916: 99.5142% ( 4) 00:09:21.578 43372.916 - 43611.229: 99.5628% ( 6) 00:09:21.578 43611.229 - 43849.542: 99.6114% ( 6) 00:09:21.578 43849.542 - 44087.855: 99.6519% ( 5) 00:09:21.578 44087.855 - 44326.167: 99.7005% ( 6) 00:09:21.578 44326.167 - 44564.480: 99.7490% ( 6) 00:09:21.578 44564.480 - 44802.793: 99.7976% ( 6) 00:09:21.578 44802.793 - 45041.105: 99.8462% ( 6) 00:09:21.578 45041.105 - 45279.418: 99.8948% ( 6) 00:09:21.578 45279.418 - 45517.731: 99.9433% ( 6) 00:09:21.578 45517.731 - 45756.044: 99.9919% ( 6) 00:09:21.578 45756.044 - 45994.356: 100.0000% ( 1) 00:09:21.578 00:09:21.578 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:21.578 ============================================================================== 00:09:21.578 Range in us Cumulative IO count 00:09:21.578 8340.945 - 8400.524: 0.0405% ( 5) 00:09:21.578 8400.524 - 8460.102: 0.1214% ( 10) 00:09:21.578 8460.102 - 8519.680: 0.2753% ( 19) 00:09:21.578 8519.680 - 8579.258: 0.4938% ( 27) 00:09:21.578 8579.258 - 8638.836: 0.7853% ( 36) 00:09:21.578 8638.836 - 8698.415: 1.2953% ( 63) 00:09:21.578 8698.415 - 8757.993: 2.1859% ( 110) 00:09:21.578 8757.993 - 8817.571: 3.4731% ( 159) 00:09:21.578 8817.571 - 8877.149: 5.1975% ( 213) 00:09:21.578 8877.149 - 8936.727: 7.2215% ( 250) 00:09:21.578 8936.727 - 8996.305: 9.6826% ( 304) 00:09:21.578 8996.305 - 9055.884: 12.3786% ( 333) 00:09:21.578 9055.884 - 9115.462: 15.3416% ( 366) 00:09:21.578 9115.462 - 9175.040: 18.5962% ( 402) 00:09:21.578 9175.040 - 9234.618: 21.9155% ( 410) 00:09:21.578 9234.618 - 9294.196: 25.4372% ( 435) 00:09:21.578 9294.196 - 9353.775: 28.9103% ( 429) 00:09:21.578 9353.775 - 9413.353: 32.4806% ( 441) 00:09:21.578 9413.353 - 9472.931: 35.9618% ( 430) 00:09:21.578 9472.931 - 9532.509: 39.5078% ( 438) 00:09:21.578 9532.509 - 9592.087: 43.1104% ( 445) 00:09:21.578 9592.087 - 9651.665: 46.5350% ( 423) 00:09:21.578 9651.665 - 9711.244: 49.7085% ( 392) 00:09:21.578 9711.244 - 9770.822: 52.4530% ( 339) 00:09:21.578 9770.822 - 9830.400: 54.7927% ( 289) 00:09:21.578 9830.400 - 9889.978: 56.8005% ( 248) 00:09:21.578 9889.978 - 9949.556: 58.6464% ( 228) 00:09:21.578 9949.556 - 10009.135: 60.3789% ( 214) 00:09:21.578 10009.135 - 10068.713: 61.9900% ( 199) 00:09:21.578 10068.713 - 10128.291: 63.4958% ( 186) 
00:09:21.579 10128.291 - 10187.869: 64.8640% ( 169) 00:09:21.579 10187.869 - 10247.447: 66.2727% ( 174) 00:09:21.579 10247.447 - 10307.025: 67.6894% ( 175) 00:09:21.579 10307.025 - 10366.604: 69.1791% ( 184) 00:09:21.579 10366.604 - 10426.182: 70.5230% ( 166) 00:09:21.579 10426.182 - 10485.760: 71.9236% ( 173) 00:09:21.579 10485.760 - 10545.338: 73.3484% ( 176) 00:09:21.579 10545.338 - 10604.916: 74.9109% ( 193) 00:09:21.579 10604.916 - 10664.495: 76.3925% ( 183) 00:09:21.579 10664.495 - 10724.073: 77.8902% ( 185) 00:09:21.579 10724.073 - 10783.651: 79.3151% ( 176) 00:09:21.579 10783.651 - 10843.229: 80.7723% ( 180) 00:09:21.579 10843.229 - 10902.807: 82.1163% ( 166) 00:09:21.579 10902.807 - 10962.385: 83.4197% ( 161) 00:09:21.579 10962.385 - 11021.964: 84.6422% ( 151) 00:09:21.579 11021.964 - 11081.542: 85.8080% ( 144) 00:09:21.579 11081.542 - 11141.120: 86.9252% ( 138) 00:09:21.579 11141.120 - 11200.698: 87.7753% ( 105) 00:09:21.579 11200.698 - 11260.276: 88.4958% ( 89) 00:09:21.579 11260.276 - 11319.855: 89.2001% ( 87) 00:09:21.579 11319.855 - 11379.433: 89.8478% ( 80) 00:09:21.579 11379.433 - 11439.011: 90.4145% ( 70) 00:09:21.579 11439.011 - 11498.589: 90.8355% ( 52) 00:09:21.579 11498.589 - 11558.167: 91.1917% ( 44) 00:09:21.579 11558.167 - 11617.745: 91.5722% ( 47) 00:09:21.579 11617.745 - 11677.324: 91.9527% ( 47) 00:09:21.579 11677.324 - 11736.902: 92.3170% ( 45) 00:09:21.579 11736.902 - 11796.480: 92.6328% ( 39) 00:09:21.579 11796.480 - 11856.058: 92.9161% ( 35) 00:09:21.579 11856.058 - 11915.636: 93.1347% ( 27) 00:09:21.579 11915.636 - 11975.215: 93.3452% ( 26) 00:09:21.579 11975.215 - 12034.793: 93.5395% ( 24) 00:09:21.579 12034.793 - 12094.371: 93.7500% ( 26) 00:09:21.579 12094.371 - 12153.949: 93.9929% ( 30) 00:09:21.579 12153.949 - 12213.527: 94.2196% ( 28) 00:09:21.579 12213.527 - 12273.105: 94.4381% ( 27) 00:09:21.579 12273.105 - 12332.684: 94.6324% ( 24) 00:09:21.579 12332.684 - 12392.262: 94.8187% ( 23) 00:09:21.579 12392.262 - 12451.840: 94.9644% ( 18) 00:09:21.579 12451.840 - 12511.418: 95.0696% ( 13) 00:09:21.579 12511.418 - 12570.996: 95.1749% ( 13) 00:09:21.579 12570.996 - 12630.575: 95.2477% ( 9) 00:09:21.579 12630.575 - 12690.153: 95.3206% ( 9) 00:09:21.579 12690.153 - 12749.731: 95.3935% ( 9) 00:09:21.579 12749.731 - 12809.309: 95.4582% ( 8) 00:09:21.579 12809.309 - 12868.887: 95.5311% ( 9) 00:09:21.579 12868.887 - 12928.465: 95.6120% ( 10) 00:09:21.579 12928.465 - 12988.044: 95.6930% ( 10) 00:09:21.579 12988.044 - 13047.622: 95.7821% ( 11) 00:09:21.579 13047.622 - 13107.200: 95.8792% ( 12) 00:09:21.579 13107.200 - 13166.778: 96.0006% ( 15) 00:09:21.579 13166.778 - 13226.356: 96.0735% ( 9) 00:09:21.579 13226.356 - 13285.935: 96.2111% ( 17) 00:09:21.579 13285.935 - 13345.513: 96.3164% ( 13) 00:09:21.579 13345.513 - 13405.091: 96.4540% ( 17) 00:09:21.579 13405.091 - 13464.669: 96.5674% ( 14) 00:09:21.579 13464.669 - 13524.247: 96.6969% ( 16) 00:09:21.579 13524.247 - 13583.825: 96.8264% ( 16) 00:09:21.579 13583.825 - 13643.404: 96.9560% ( 16) 00:09:21.579 13643.404 - 13702.982: 97.0855% ( 16) 00:09:21.579 13702.982 - 13762.560: 97.2231% ( 17) 00:09:21.579 13762.560 - 13822.138: 97.3365% ( 14) 00:09:21.579 13822.138 - 13881.716: 97.4660% ( 16) 00:09:21.579 13881.716 - 13941.295: 97.5874% ( 15) 00:09:21.579 13941.295 - 14000.873: 97.6846% ( 12) 00:09:21.579 14000.873 - 14060.451: 97.7898% ( 13) 00:09:21.579 14060.451 - 14120.029: 97.9032% ( 14) 00:09:21.579 14120.029 - 14179.607: 98.0246% ( 15) 00:09:21.579 14179.607 - 14239.185: 98.1703% ( 18) 00:09:21.579 14239.185 
- 14298.764: 98.2837% ( 14) 00:09:21.579 14298.764 - 14358.342: 98.3889% ( 13) 00:09:21.579 14358.342 - 14417.920: 98.4699% ( 10) 00:09:21.579 14417.920 - 14477.498: 98.5508% ( 10) 00:09:21.579 14477.498 - 14537.076: 98.5994% ( 6) 00:09:21.579 14537.076 - 14596.655: 98.6480% ( 6) 00:09:21.579 14596.655 - 14656.233: 98.6966% ( 6) 00:09:21.579 14656.233 - 14715.811: 98.7128% ( 2) 00:09:21.579 14715.811 - 14775.389: 98.7451% ( 4) 00:09:21.579 14775.389 - 14834.967: 98.7694% ( 3) 00:09:21.579 14834.967 - 14894.545: 98.8018% ( 4) 00:09:21.579 14894.545 - 14954.124: 98.8261% ( 3) 00:09:21.579 14954.124 - 15013.702: 98.8504% ( 3) 00:09:21.579 15013.702 - 15073.280: 98.8747% ( 3) 00:09:21.579 15073.280 - 15132.858: 98.8990% ( 3) 00:09:21.579 15132.858 - 15192.436: 98.9152% ( 2) 00:09:21.579 15192.436 - 15252.015: 98.9394% ( 3) 00:09:21.579 15252.015 - 15371.171: 98.9637% ( 3) 00:09:21.579 32172.218 - 32410.531: 98.9880% ( 3) 00:09:21.579 32410.531 - 32648.844: 99.0285% ( 5) 00:09:21.579 32648.844 - 32887.156: 99.0771% ( 6) 00:09:21.579 32887.156 - 33125.469: 99.1176% ( 5) 00:09:21.579 33125.469 - 33363.782: 99.1580% ( 5) 00:09:21.579 33363.782 - 33602.095: 99.2066% ( 6) 00:09:21.579 33602.095 - 33840.407: 99.2552% ( 6) 00:09:21.579 33840.407 - 34078.720: 99.3038% ( 6) 00:09:21.579 34078.720 - 34317.033: 99.3523% ( 6) 00:09:21.579 34317.033 - 34555.345: 99.4009% ( 6) 00:09:21.579 34555.345 - 34793.658: 99.4414% ( 5) 00:09:21.579 34793.658 - 35031.971: 99.4819% ( 5) 00:09:21.579 40989.789 - 41228.102: 99.5223% ( 5) 00:09:21.579 41228.102 - 41466.415: 99.5709% ( 6) 00:09:21.579 41466.415 - 41704.727: 99.6114% ( 5) 00:09:21.579 41704.727 - 41943.040: 99.6600% ( 6) 00:09:21.579 41943.040 - 42181.353: 99.7005% ( 5) 00:09:21.579 42181.353 - 42419.665: 99.7409% ( 5) 00:09:21.579 42419.665 - 42657.978: 99.7895% ( 6) 00:09:21.579 42657.978 - 42896.291: 99.8381% ( 6) 00:09:21.579 42896.291 - 43134.604: 99.8867% ( 6) 00:09:21.579 43134.604 - 43372.916: 99.9352% ( 6) 00:09:21.579 43372.916 - 43611.229: 99.9838% ( 6) 00:09:21.579 43611.229 - 43849.542: 100.0000% ( 2) 00:09:21.579 00:09:21.579 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:21.579 ============================================================================== 00:09:21.579 Range in us Cumulative IO count 00:09:21.579 8400.524 - 8460.102: 0.0486% ( 6) 00:09:21.579 8460.102 - 8519.680: 0.1295% ( 10) 00:09:21.579 8519.680 - 8579.258: 0.3076% ( 22) 00:09:21.579 8579.258 - 8638.836: 0.5505% ( 30) 00:09:21.579 8638.836 - 8698.415: 1.0606% ( 63) 00:09:21.579 8698.415 - 8757.993: 1.8863% ( 102) 00:09:21.579 8757.993 - 8817.571: 3.2302% ( 166) 00:09:21.579 8817.571 - 8877.149: 4.9466% ( 212) 00:09:21.579 8877.149 - 8936.727: 7.0353% ( 258) 00:09:21.579 8936.727 - 8996.305: 9.4883% ( 303) 00:09:21.579 8996.305 - 9055.884: 12.2490% ( 341) 00:09:21.579 9055.884 - 9115.462: 15.0826% ( 350) 00:09:21.579 9115.462 - 9175.040: 18.1752% ( 382) 00:09:21.579 9175.040 - 9234.618: 21.6564% ( 430) 00:09:21.579 9234.618 - 9294.196: 25.2348% ( 442) 00:09:21.579 9294.196 - 9353.775: 28.8131% ( 442) 00:09:21.579 9353.775 - 9413.353: 32.6020% ( 468) 00:09:21.579 9413.353 - 9472.931: 36.3990% ( 469) 00:09:21.579 9472.931 - 9532.509: 40.0421% ( 450) 00:09:21.579 9532.509 - 9592.087: 43.5962% ( 439) 00:09:21.579 9592.087 - 9651.665: 47.0045% ( 421) 00:09:21.579 9651.665 - 9711.244: 50.2348% ( 399) 00:09:21.579 9711.244 - 9770.822: 52.7607% ( 312) 00:09:21.579 9770.822 - 9830.400: 54.9790% ( 274) 00:09:21.579 9830.400 - 9889.978: 56.9543% ( 244) 00:09:21.579 
9889.978 - 9949.556: 58.6545% ( 210) 00:09:21.579 9949.556 - 10009.135: 60.2655% ( 199) 00:09:21.579 10009.135 - 10068.713: 61.7633% ( 185) 00:09:21.579 10068.713 - 10128.291: 63.2772% ( 187) 00:09:21.579 10128.291 - 10187.869: 64.6697% ( 172) 00:09:21.579 10187.869 - 10247.447: 66.0541% ( 171) 00:09:21.579 10247.447 - 10307.025: 67.4790% ( 176) 00:09:21.579 10307.025 - 10366.604: 68.8795% ( 173) 00:09:21.579 10366.604 - 10426.182: 70.3125% ( 177) 00:09:21.579 10426.182 - 10485.760: 71.7212% ( 174) 00:09:21.579 10485.760 - 10545.338: 73.2918% ( 194) 00:09:21.579 10545.338 - 10604.916: 74.8543% ( 193) 00:09:21.579 10604.916 - 10664.495: 76.3763% ( 188) 00:09:21.579 10664.495 - 10724.073: 77.9226% ( 191) 00:09:21.579 10724.073 - 10783.651: 79.4203% ( 185) 00:09:21.579 10783.651 - 10843.229: 80.9100% ( 184) 00:09:21.579 10843.229 - 10902.807: 82.3753% ( 181) 00:09:21.579 10902.807 - 10962.385: 83.6949% ( 163) 00:09:21.579 10962.385 - 11021.964: 85.0227% ( 164) 00:09:21.579 11021.964 - 11081.542: 86.1885% ( 144) 00:09:21.579 11081.542 - 11141.120: 87.2571% ( 132) 00:09:21.579 11141.120 - 11200.698: 88.0262% ( 95) 00:09:21.579 11200.698 - 11260.276: 88.7063% ( 84) 00:09:21.579 11260.276 - 11319.855: 89.2892% ( 72) 00:09:21.579 11319.855 - 11379.433: 89.7749% ( 60) 00:09:21.579 11379.433 - 11439.011: 90.1878% ( 51) 00:09:21.579 11439.011 - 11498.589: 90.6412% ( 56) 00:09:21.579 11498.589 - 11558.167: 91.0541% ( 51) 00:09:21.579 11558.167 - 11617.745: 91.5722% ( 64) 00:09:21.579 11617.745 - 11677.324: 92.0094% ( 54) 00:09:21.579 11677.324 - 11736.902: 92.3575% ( 43) 00:09:21.579 11736.902 - 11796.480: 92.6975% ( 42) 00:09:21.579 11796.480 - 11856.058: 92.9890% ( 36) 00:09:21.579 11856.058 - 11915.636: 93.2400% ( 31) 00:09:21.579 11915.636 - 11975.215: 93.4747% ( 29) 00:09:21.579 11975.215 - 12034.793: 93.6852% ( 26) 00:09:21.579 12034.793 - 12094.371: 93.8795% ( 24) 00:09:21.579 12094.371 - 12153.949: 94.0900% ( 26) 00:09:21.579 12153.949 - 12213.527: 94.2600% ( 21) 00:09:21.579 12213.527 - 12273.105: 94.4462% ( 23) 00:09:21.579 12273.105 - 12332.684: 94.6082% ( 20) 00:09:21.579 12332.684 - 12392.262: 94.7701% ( 20) 00:09:21.579 12392.262 - 12451.840: 94.8996% ( 16) 00:09:21.579 12451.840 - 12511.418: 95.0372% ( 17) 00:09:21.579 12511.418 - 12570.996: 95.1506% ( 14) 00:09:21.579 12570.996 - 12630.575: 95.2477% ( 12) 00:09:21.579 12630.575 - 12690.153: 95.3044% ( 7) 00:09:21.579 12690.153 - 12749.731: 95.3773% ( 9) 00:09:21.579 12749.731 - 12809.309: 95.4582% ( 10) 00:09:21.579 12809.309 - 12868.887: 95.5230% ( 8) 00:09:21.579 12868.887 - 12928.465: 95.6120% ( 11) 00:09:21.579 12928.465 - 12988.044: 95.7254% ( 14) 00:09:21.580 12988.044 - 13047.622: 95.8306% ( 13) 00:09:21.580 13047.622 - 13107.200: 95.9197% ( 11) 00:09:21.580 13107.200 - 13166.778: 96.0573% ( 17) 00:09:21.580 13166.778 - 13226.356: 96.2192% ( 20) 00:09:21.580 13226.356 - 13285.935: 96.3731% ( 19) 00:09:21.580 13285.935 - 13345.513: 96.5269% ( 19) 00:09:21.580 13345.513 - 13405.091: 96.6564% ( 16) 00:09:21.580 13405.091 - 13464.669: 96.8102% ( 19) 00:09:21.580 13464.669 - 13524.247: 96.9560% ( 18) 00:09:21.580 13524.247 - 13583.825: 97.1260% ( 21) 00:09:21.580 13583.825 - 13643.404: 97.2636% ( 17) 00:09:21.580 13643.404 - 13702.982: 97.4174% ( 19) 00:09:21.580 13702.982 - 13762.560: 97.5551% ( 17) 00:09:21.580 13762.560 - 13822.138: 97.6765% ( 15) 00:09:21.580 13822.138 - 13881.716: 97.8141% ( 17) 00:09:21.580 13881.716 - 13941.295: 97.9356% ( 15) 00:09:21.580 13941.295 - 14000.873: 98.0489% ( 14) 00:09:21.580 14000.873 - 
14060.451: 98.1541% ( 13) 00:09:21.580 14060.451 - 14120.029: 98.2918% ( 17) 00:09:21.580 14120.029 - 14179.607: 98.3808% ( 11) 00:09:21.580 14179.607 - 14239.185: 98.4699% ( 11) 00:09:21.580 14239.185 - 14298.764: 98.5508% ( 10) 00:09:21.580 14298.764 - 14358.342: 98.5994% ( 6) 00:09:21.580 14358.342 - 14417.920: 98.6237% ( 3) 00:09:21.580 14417.920 - 14477.498: 98.6561% ( 4) 00:09:21.580 14477.498 - 14537.076: 98.6804% ( 3) 00:09:21.580 14537.076 - 14596.655: 98.7047% ( 3) 00:09:21.580 14596.655 - 14656.233: 98.7290% ( 3) 00:09:21.580 14656.233 - 14715.811: 98.7613% ( 4) 00:09:21.580 14715.811 - 14775.389: 98.7856% ( 3) 00:09:21.580 14775.389 - 14834.967: 98.8099% ( 3) 00:09:21.580 14834.967 - 14894.545: 98.8423% ( 4) 00:09:21.580 14894.545 - 14954.124: 98.8666% ( 3) 00:09:21.580 14954.124 - 15013.702: 98.8909% ( 3) 00:09:21.580 15013.702 - 15073.280: 98.9071% ( 2) 00:09:21.580 15073.280 - 15132.858: 98.9233% ( 2) 00:09:21.580 15132.858 - 15192.436: 98.9394% ( 2) 00:09:21.580 15192.436 - 15252.015: 98.9637% ( 3) 00:09:21.580 29908.247 - 30027.404: 98.9718% ( 1) 00:09:21.580 30027.404 - 30146.560: 98.9961% ( 3) 00:09:21.580 30146.560 - 30265.716: 99.0123% ( 2) 00:09:21.580 30265.716 - 30384.873: 99.0366% ( 3) 00:09:21.580 30384.873 - 30504.029: 99.0609% ( 3) 00:09:21.580 30504.029 - 30742.342: 99.1014% ( 5) 00:09:21.580 30742.342 - 30980.655: 99.1499% ( 6) 00:09:21.580 30980.655 - 31218.967: 99.1985% ( 6) 00:09:21.580 31218.967 - 31457.280: 99.2471% ( 6) 00:09:21.580 31457.280 - 31695.593: 99.2957% ( 6) 00:09:21.580 31695.593 - 31933.905: 99.3361% ( 5) 00:09:21.580 31933.905 - 32172.218: 99.3847% ( 6) 00:09:21.580 32172.218 - 32410.531: 99.4252% ( 5) 00:09:21.580 32410.531 - 32648.844: 99.4738% ( 6) 00:09:21.580 32648.844 - 32887.156: 99.4819% ( 1) 00:09:21.580 38130.036 - 38368.349: 99.5062% ( 3) 00:09:21.580 38368.349 - 38606.662: 99.5547% ( 6) 00:09:21.580 38606.662 - 38844.975: 99.5952% ( 5) 00:09:21.580 38844.975 - 39083.287: 99.6438% ( 6) 00:09:21.580 39083.287 - 39321.600: 99.6924% ( 6) 00:09:21.580 39321.600 - 39559.913: 99.7409% ( 6) 00:09:21.580 39559.913 - 39798.225: 99.7895% ( 6) 00:09:21.580 39798.225 - 40036.538: 99.8300% ( 5) 00:09:21.580 40036.538 - 40274.851: 99.8786% ( 6) 00:09:21.580 40274.851 - 40513.164: 99.9271% ( 6) 00:09:21.580 40513.164 - 40751.476: 99.9757% ( 6) 00:09:21.580 40751.476 - 40989.789: 100.0000% ( 3) 00:09:21.580 00:09:21.580 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:21.580 ============================================================================== 00:09:21.580 Range in us Cumulative IO count 00:09:21.580 8340.945 - 8400.524: 0.0162% ( 2) 00:09:21.580 8400.524 - 8460.102: 0.0648% ( 6) 00:09:21.580 8460.102 - 8519.680: 0.1700% ( 13) 00:09:21.580 8519.680 - 8579.258: 0.3481% ( 22) 00:09:21.580 8579.258 - 8638.836: 0.5748% ( 28) 00:09:21.580 8638.836 - 8698.415: 1.0039% ( 53) 00:09:21.580 8698.415 - 8757.993: 1.8135% ( 100) 00:09:21.580 8757.993 - 8817.571: 3.0602% ( 154) 00:09:21.580 8817.571 - 8877.149: 4.8332% ( 219) 00:09:21.580 8877.149 - 8936.727: 7.0434% ( 273) 00:09:21.580 8936.727 - 8996.305: 9.3588% ( 286) 00:09:21.580 8996.305 - 9055.884: 11.9981% ( 326) 00:09:21.580 9055.884 - 9115.462: 14.9611% ( 366) 00:09:21.580 9115.462 - 9175.040: 18.0457% ( 381) 00:09:21.580 9175.040 - 9234.618: 21.3326% ( 406) 00:09:21.580 9234.618 - 9294.196: 25.0000% ( 453) 00:09:21.580 9294.196 - 9353.775: 28.7808% ( 467) 00:09:21.580 9353.775 - 9413.353: 32.4968% ( 459) 00:09:21.580 9413.353 - 9472.931: 36.2451% ( 463) 00:09:21.580 
9472.931 - 9532.509: 39.9126% ( 453) 00:09:21.580 9532.509 - 9592.087: 43.5962% ( 455) 00:09:21.580 9592.087 - 9651.665: 47.0531% ( 427) 00:09:21.580 9651.665 - 9711.244: 50.1700% ( 385) 00:09:21.580 9711.244 - 9770.822: 52.9145% ( 339) 00:09:21.580 9770.822 - 9830.400: 55.1409% ( 275) 00:09:21.580 9830.400 - 9889.978: 57.0596% ( 237) 00:09:21.580 9889.978 - 9949.556: 58.7840% ( 213) 00:09:21.580 9949.556 - 10009.135: 60.4437% ( 205) 00:09:21.580 10009.135 - 10068.713: 61.9495% ( 186) 00:09:21.580 10068.713 - 10128.291: 63.5039% ( 192) 00:09:21.580 10128.291 - 10187.869: 64.9288% ( 176) 00:09:21.580 10187.869 - 10247.447: 66.4103% ( 183) 00:09:21.580 10247.447 - 10307.025: 67.8918% ( 183) 00:09:21.580 10307.025 - 10366.604: 69.3167% ( 176) 00:09:21.580 10366.604 - 10426.182: 70.7902% ( 182) 00:09:21.580 10426.182 - 10485.760: 72.2960% ( 186) 00:09:21.580 10485.760 - 10545.338: 73.8018% ( 186) 00:09:21.580 10545.338 - 10604.916: 75.2915% ( 184) 00:09:21.580 10604.916 - 10664.495: 76.7568% ( 181) 00:09:21.580 10664.495 - 10724.073: 78.2545% ( 185) 00:09:21.580 10724.073 - 10783.651: 79.8170% ( 193) 00:09:21.580 10783.651 - 10843.229: 81.2581% ( 178) 00:09:21.580 10843.229 - 10902.807: 82.7396% ( 183) 00:09:21.580 10902.807 - 10962.385: 84.0431% ( 161) 00:09:21.580 10962.385 - 11021.964: 85.2736% ( 152) 00:09:21.580 11021.964 - 11081.542: 86.3990% ( 139) 00:09:21.580 11081.542 - 11141.120: 87.3138% ( 113) 00:09:21.580 11141.120 - 11200.698: 88.1315% ( 101) 00:09:21.580 11200.698 - 11260.276: 88.7144% ( 72) 00:09:21.580 11260.276 - 11319.855: 89.1839% ( 58) 00:09:21.580 11319.855 - 11379.433: 89.6049% ( 52) 00:09:21.580 11379.433 - 11439.011: 90.0340% ( 53) 00:09:21.580 11439.011 - 11498.589: 90.4550% ( 52) 00:09:21.580 11498.589 - 11558.167: 90.8841% ( 53) 00:09:21.580 11558.167 - 11617.745: 91.3617% ( 59) 00:09:21.580 11617.745 - 11677.324: 91.7584% ( 49) 00:09:21.580 11677.324 - 11736.902: 92.1227% ( 45) 00:09:21.580 11736.902 - 11796.480: 92.5032% ( 47) 00:09:21.580 11796.480 - 11856.058: 92.7947% ( 36) 00:09:21.580 11856.058 - 11915.636: 93.0619% ( 33) 00:09:21.580 11915.636 - 11975.215: 93.2804% ( 27) 00:09:21.580 11975.215 - 12034.793: 93.5233% ( 30) 00:09:21.580 12034.793 - 12094.371: 93.7500% ( 28) 00:09:21.580 12094.371 - 12153.949: 93.9362% ( 23) 00:09:21.580 12153.949 - 12213.527: 94.1386% ( 25) 00:09:21.580 12213.527 - 12273.105: 94.3410% ( 25) 00:09:21.580 12273.105 - 12332.684: 94.4867% ( 18) 00:09:21.580 12332.684 - 12392.262: 94.6405% ( 19) 00:09:21.580 12392.262 - 12451.840: 94.7782% ( 17) 00:09:21.580 12451.840 - 12511.418: 94.9158% ( 17) 00:09:21.580 12511.418 - 12570.996: 94.9968% ( 10) 00:09:21.580 12570.996 - 12630.575: 95.1344% ( 17) 00:09:21.580 12630.575 - 12690.153: 95.2396% ( 13) 00:09:21.580 12690.153 - 12749.731: 95.3854% ( 18) 00:09:21.580 12749.731 - 12809.309: 95.4906% ( 13) 00:09:21.580 12809.309 - 12868.887: 95.5878% ( 12) 00:09:21.580 12868.887 - 12928.465: 95.7011% ( 14) 00:09:21.580 12928.465 - 12988.044: 95.7821% ( 10) 00:09:21.580 12988.044 - 13047.622: 95.8630% ( 10) 00:09:21.580 13047.622 - 13107.200: 95.9602% ( 12) 00:09:21.580 13107.200 - 13166.778: 96.0897% ( 16) 00:09:21.580 13166.778 - 13226.356: 96.2273% ( 17) 00:09:21.580 13226.356 - 13285.935: 96.3731% ( 18) 00:09:21.580 13285.935 - 13345.513: 96.5431% ( 21) 00:09:21.580 13345.513 - 13405.091: 96.7212% ( 22) 00:09:21.580 13405.091 - 13464.669: 96.8669% ( 18) 00:09:21.580 13464.669 - 13524.247: 97.0450% ( 22) 00:09:21.580 13524.247 - 13583.825: 97.1826% ( 17) 00:09:21.580 13583.825 - 13643.404: 
00:09:21.580 [Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0 concludes: cumulative IO count climbs from 97.3365% in the 13643.404 - 13702.982 us bucket to 100.0000% at 38368.349 us; full per-bucket data omitted.]
00:09:21.581 
00:09:21.581 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:21.581 ==============================================================================
00:09:21.581        Range in us     Cumulative    IO count
00:09:21.582 [Per-bucket data omitted: the first I/Os land in the 8340.945 - 8400.524 us bucket (0.0081%), cumulative count reaches 50.0000% in the 9711.244 - 9770.822 us bucket, and the tail reaches 100.0000% at 35270.284 us.]
00:09:21.582 
00:09:21.582 11:21:04 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:22.956 Initializing NVMe Controllers
00:09:22.956 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:22.956 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:22.956 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:22.956 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:22.956 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:22.956 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:22.956 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:22.956 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:22.956 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:22.956 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:22.956 Initialization complete. Launching workers.
00:09:22.956 ========================================================
00:09:22.956                                                                   Latency(us)
00:09:22.956 Device Information                       :       IOPS      MiB/s    Average        min        max
00:09:22.956 PCIE (0000:00:10.0) NSID 1 from core 0   :   11553.59     135.39   11110.27    8530.23   37753.28
00:09:22.956 PCIE (0000:00:11.0) NSID 1 from core 0   :   11553.59     135.39   11093.53    8709.99   35500.48
00:09:22.956 PCIE (0000:00:13.0) NSID 1 from core 0   :   11553.59     135.39   11074.51    8756.68   34090.28
00:09:22.956 PCIE (0000:00:12.0) NSID 1 from core 0   :   11553.59     135.39   11056.03    8614.72   31896.66
00:09:22.956 PCIE (0000:00:12.0) NSID 2 from core 0   :   11553.59     135.39   11038.35    8675.30   29774.31
00:09:22.956 PCIE (0000:00:12.0) NSID 3 from core 0   :   11553.59     135.39   11020.32    8541.11   27702.01
00:09:22.956 ========================================================
00:09:22.956 Total                                    :   69321.55     812.36   11065.50    8530.23   37753.28
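
The write pass above can be reproduced outside the CI harness. A minimal sketch, assuming a built SPDK tree at the same path and devices already bound to a userspace driver; the flag meanings follow spdk_nvme_perf's help text (-q queue depth, -o I/O size in bytes, -w workload type, -t run time in seconds, -L latency tracking, doubled here for detailed histograms, -i shared memory group ID):

    cd /home/vagrant/spdk_repo/spdk
    # Bind NVMe devices to a userspace driver first (standard SPDK setup step).
    sudo scripts/setup.sh
    # QD 128, 12288-byte writes for 1 second, latency histograms (-LL), shm group 0.
    sudo build/bin/spdk_nvme_perf -q 128 -o 12288 -w write -t 1 -LL -i 0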
00:09:22.956 
00:09:22.957 Summary latency data for the write test, all six namespaces (values in us, reconstructed as one row per device):
00:09:22.957 Device                            1%        10%        25%        50%        75%        90%        95%        98%        99%      99.5%      99.9%
00:09:22.957 PCIE (0000:00:10.0) NSID 1:  8996.305   9711.244  10128.291  10783.651  11736.902  12392.262  12749.731  13166.778  27882.589  35746.909  37415.098
00:09:22.957 PCIE (0000:00:11.0) NSID 1:  9115.462   9830.400  10187.869  10664.495  11736.902  12273.105  12630.575  12988.044  27167.651  33840.407  35270.284
00:09:22.957 PCIE (0000:00:13.0) NSID 1:  9115.462   9770.822  10187.869  10664.495  11736.902  12332.684  12630.575  13047.622  25618.618  32410.531  33840.407
00:09:22.957 PCIE (0000:00:12.0) NSID 1:  9115.462   9770.822  10187.869  10724.073  11736.902  12332.684  12630.575  13166.778  23592.960  30146.560  31695.593
00:09:22.957 PCIE (0000:00:12.0) NSID 2:  9115.462   9770.822  10187.869  10724.073  11736.902  12332.684  12630.575  13047.622  21924.771  26691.025  29550.778
00:09:22.957 PCIE (0000:00:12.0) NSID 3:  9115.462   9770.822  10187.869  10664.495  11736.902  12332.684  12630.575  13047.622  19660.800  24665.367  27405.964
00:09:22.957 [The 99.99% and higher percentiles track each device's max latency from the table above. Per-bucket latency histograms (Range in us / Cumulative IO count) for all six namespaces are omitted here; each runs from buckets near 8.4 ms at the low end up to the device's max latency.]
00:09:22.961 
00:09:22.961 11:21:05 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:09:22.961 
00:09:22.961 real	0m2.728s
00:09:22.961 user	0m2.308s
00:09:22.961 sys	0m0.301s
00:09:22.961 11:21:05 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:22.961 11:21:05 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:09:22.961 ************************************
00:09:22.961 END TEST nvme_perf
00:09:22.961 ************************************
00:09:22.961 11:21:05 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:22.961 11:21:05 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:09:22.961 11:21:05 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:22.961 11:21:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:22.961 ************************************
00:09:22.961 START TEST nvme_hello_world
00:09:22.961 ************************************
00:09:22.961 11:21:05 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:09:23.219 Initializing NVMe Controllers
00:09:23.219 Attached to 0000:00:10.0
00:09:23.219 Namespace ID: 1 size: 6GB
00:09:23.219 Attached to 0000:00:11.0
00:09:23.219 Namespace ID: 1 size: 5GB
00:09:23.219 Attached to 0000:00:13.0
00:09:23.219 Namespace ID: 1 size: 1GB
00:09:23.219 Attached to 0000:00:12.0
00:09:23.219 Namespace ID: 1 size: 4GB
00:09:23.219 Namespace ID: 2 size: 4GB
00:09:23.219 Namespace ID: 3 size: 4GB
00:09:23.219 Initialization complete.
00:09:23.219 INFO: using host memory buffer for IO
00:09:23.219 Hello world!
00:09:23.219 INFO: using host memory buffer for IO
00:09:23.219 Hello world!
00:09:23.219 INFO: using host memory buffer for IO
00:09:23.219 Hello world!
00:09:23.219 INFO: using host memory buffer for IO
00:09:23.219 Hello world!
00:09:23.219 INFO: using host memory buffer for IO
00:09:23.219 Hello world!
00:09:23.219 INFO: using host memory buffer for IO
00:09:23.219 Hello world!
00:09:23.219 
00:09:23.219 real	0m0.353s
00:09:23.219 user	0m0.143s
00:09:23.219 sys	0m0.161s
00:09:23.219 11:21:06 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:23.219 11:21:06 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:23.219 ************************************
00:09:23.219 END TEST nvme_hello_world
00:09:23.219 ************************************
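
hello_world is the stock SPDK example driven by run_test here; it writes a string to the first block of each attached namespace and reads it back. A sketch of a standalone run under the same layout (root assumed for device access; -i 0 keeps the shared memory group ID used throughout this job):

    cd /home/vagrant/spdk_repo/spdk
    sudo build/examples/hello_world -i 0
    # Expected per namespace:
    #   INFO: using host memory buffer for IO
    #   Hello world!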
00:09:23.219 11:21:06 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:23.219 11:21:06 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:23.219 11:21:06 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:23.219 11:21:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:23.219 ************************************
00:09:23.219 START TEST nvme_sgl
00:09:23.219 ************************************
00:09:23.219 11:21:06 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:23.478 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:23.478 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:23.736 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:23.736 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:23.736 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:23.736 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:23.736 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:23.736 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:23.736 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:23.736 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:23.736 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:23.736 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:23.736 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:09:23.736 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:09:23.736 NVMe Readv/Writev Request test
00:09:23.736 Attached to 0000:00:10.0
00:09:23.736 Attached to 0000:00:11.0
00:09:23.736 Attached to 0000:00:13.0
00:09:23.736 Attached to 0000:00:12.0
00:09:23.736 0000:00:10.0: build_io_request_2 test passed
00:09:23.736 0000:00:10.0: build_io_request_4 test passed
00:09:23.736 0000:00:10.0: build_io_request_5 test passed
00:09:23.736 0000:00:10.0: build_io_request_6 test passed
00:09:23.736 0000:00:10.0: build_io_request_7 test passed
00:09:23.736 0000:00:10.0: build_io_request_10 test passed
00:09:23.736 0000:00:11.0: build_io_request_2 test passed
00:09:23.736 0000:00:11.0: build_io_request_4 test passed
00:09:23.736 0000:00:11.0: build_io_request_5 test passed
00:09:23.736 0000:00:11.0: build_io_request_6 test passed
00:09:23.736 0000:00:11.0: build_io_request_7 test passed
00:09:23.736 0000:00:11.0: build_io_request_10 test passed
00:09:23.736 Cleaning up...
00:09:23.736 
00:09:23.736 real	0m0.439s
00:09:23.736 user	0m0.217s
00:09:23.736 sys	0m0.173s
00:09:23.736 11:21:06 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:23.736 11:21:06 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:09:23.736 ************************************
00:09:23.736 END TEST nvme_sgl
00:09:23.736 ************************************
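
Every START/END banner pair and real/user/sys block in this log is emitted by the run_test helper from autotest_common.sh, which brackets and times the test command. A rough sketch of that pattern (a hypothetical simplified wrapper, not the exact SPDK implementation):

    run_test() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        # time prints the real/user/sys block seen after each test
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    # Usage, mirroring the log:
    # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl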
00:09:24.301 00:09:24.301 real 0m0.337s 00:09:24.301 user 0m0.126s 00:09:24.301 sys 0m0.163s 00:09:24.301 11:21:06 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.301 ************************************ 00:09:24.301 11:21:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:24.301 END TEST nvme_e2edp 00:09:24.301 ************************************ 00:09:24.301 11:21:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:24.301 11:21:06 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:24.301 11:21:06 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.301 11:21:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.301 ************************************ 00:09:24.301 START TEST nvme_reserve 00:09:24.301 ************************************ 00:09:24.301 11:21:07 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:24.558 ===================================================== 00:09:24.558 NVMe Controller at PCI bus 0, device 16, function 0 00:09:24.558 ===================================================== 00:09:24.558 Reservations: Not Supported 00:09:24.558 ===================================================== 00:09:24.558 NVMe Controller at PCI bus 0, device 17, function 0 00:09:24.558 ===================================================== 00:09:24.558 Reservations: Not Supported 00:09:24.558 ===================================================== 00:09:24.558 NVMe Controller at PCI bus 0, device 19, function 0 00:09:24.558 ===================================================== 00:09:24.558 Reservations: Not Supported 00:09:24.558 ===================================================== 00:09:24.558 NVMe Controller at PCI bus 0, device 18, function 0 00:09:24.558 ===================================================== 00:09:24.558 Reservations: Not Supported 00:09:24.558 Reservation test passed 00:09:24.558 00:09:24.558 real 0m0.333s 00:09:24.558 user 0m0.131s 00:09:24.558 sys 0m0.157s 00:09:24.558 11:21:07 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.558 11:21:07 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 ************************************ 00:09:24.558 END TEST nvme_reserve 00:09:24.558 ************************************ 00:09:24.558 11:21:07 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:24.558 11:21:07 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:24.558 11:21:07 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.558 11:21:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 ************************************ 00:09:24.558 START TEST nvme_err_injection 00:09:24.558 ************************************ 00:09:24.558 11:21:07 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:24.815 NVMe Error Injection test 00:09:24.815 Attached to 0000:00:10.0 00:09:24.815 Attached to 0000:00:11.0 00:09:24.815 Attached to 0000:00:13.0 00:09:24.815 Attached to 0000:00:12.0 00:09:24.815 0000:00:12.0: get features failed as expected 00:09:24.815 0000:00:10.0: get features failed as expected 00:09:24.815 0000:00:11.0: get features failed as expected 00:09:24.815 0000:00:13.0: get features failed as expected 00:09:24.815 
0000:00:10.0: get features successfully as expected 00:09:24.815 0000:00:11.0: get features successfully as expected 00:09:24.815 0000:00:13.0: get features successfully as expected 00:09:24.815 0000:00:12.0: get features successfully as expected 00:09:24.815 0000:00:10.0: read failed as expected 00:09:24.816 0000:00:11.0: read failed as expected 00:09:24.816 0000:00:13.0: read failed as expected 00:09:24.816 0000:00:12.0: read failed as expected 00:09:24.816 0000:00:10.0: read successfully as expected 00:09:24.816 0000:00:11.0: read successfully as expected 00:09:24.816 0000:00:13.0: read successfully as expected 00:09:24.816 0000:00:12.0: read successfully as expected 00:09:24.816 Cleaning up... 00:09:24.816 ************************************ 00:09:24.816 END TEST nvme_err_injection 00:09:24.816 ************************************ 00:09:24.816 00:09:24.816 real 0m0.364s 00:09:24.816 user 0m0.136s 00:09:24.816 sys 0m0.178s 00:09:24.816 11:21:07 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.816 11:21:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:25.138 11:21:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:25.138 11:21:07 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:09:25.138 11:21:07 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:25.138 11:21:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.138 ************************************ 00:09:25.138 START TEST nvme_overhead 00:09:25.138 ************************************ 00:09:25.138 11:21:07 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:26.530 Initializing NVMe Controllers 00:09:26.530 Attached to 0000:00:10.0 00:09:26.530 Attached to 0000:00:11.0 00:09:26.530 Attached to 0000:00:13.0 00:09:26.530 Attached to 0000:00:12.0 00:09:26.530 Initialization complete. Launching workers. 
00:09:26.530 submit (in ns) avg, min, max = 16078.2, 12270.0, 98314.1 00:09:26.530 complete (in ns) avg, min, max = 11655.2, 8555.5, 512709.5 00:09:26.530 00:09:26.530 Submit histogram 00:09:26.530 ================ 00:09:26.530 Range in us Cumulative Count 00:09:26.530 12.218 - 12.276: 0.0128% ( 1) 00:09:26.530 12.276 - 12.335: 0.0255% ( 1) 00:09:26.530 12.335 - 12.393: 0.0638% ( 3) 00:09:26.530 12.393 - 12.451: 0.1021% ( 3) 00:09:26.530 12.451 - 12.509: 0.1531% ( 4) 00:09:26.530 12.509 - 12.567: 0.3573% ( 16) 00:09:26.530 12.567 - 12.625: 0.7656% ( 32) 00:09:26.530 12.625 - 12.684: 1.5822% ( 64) 00:09:26.530 12.684 - 12.742: 2.8710% ( 101) 00:09:26.530 12.742 - 12.800: 4.1215% ( 98) 00:09:26.530 12.800 - 12.858: 5.4102% ( 101) 00:09:26.530 12.858 - 12.916: 6.5076% ( 86) 00:09:26.530 12.916 - 12.975: 7.5539% ( 82) 00:09:26.530 12.975 - 13.033: 9.1999% ( 129) 00:09:26.530 13.033 - 13.091: 11.0374% ( 144) 00:09:26.530 13.091 - 13.149: 13.3214% ( 179) 00:09:26.530 13.149 - 13.207: 15.4651% ( 168) 00:09:26.530 13.207 - 13.265: 17.6215% ( 169) 00:09:26.530 13.265 - 13.324: 19.2165% ( 125) 00:09:26.530 13.324 - 13.382: 20.9519% ( 136) 00:09:26.530 13.382 - 13.440: 22.6107% ( 130) 00:09:26.530 13.440 - 13.498: 24.5502% ( 152) 00:09:26.530 13.498 - 13.556: 26.8980% ( 184) 00:09:26.530 13.556 - 13.615: 29.1693% ( 178) 00:09:26.530 13.615 - 13.673: 31.5172% ( 184) 00:09:26.530 13.673 - 13.731: 33.9926% ( 194) 00:09:26.530 13.731 - 13.789: 36.4808% ( 195) 00:09:26.530 13.789 - 13.847: 38.7010% ( 174) 00:09:26.530 13.847 - 13.905: 40.7809% ( 163) 00:09:26.530 13.905 - 13.964: 42.9756% ( 172) 00:09:26.530 13.964 - 14.022: 45.3235% ( 184) 00:09:26.530 14.022 - 14.080: 47.6458% ( 182) 00:09:26.530 14.080 - 14.138: 49.7895% ( 168) 00:09:26.530 14.138 - 14.196: 51.7800% ( 156) 00:09:26.530 14.196 - 14.255: 53.3878% ( 126) 00:09:26.530 14.255 - 14.313: 55.2635% ( 147) 00:09:26.530 14.313 - 14.371: 56.9989% ( 136) 00:09:26.530 14.371 - 14.429: 58.5683% ( 123) 00:09:26.530 14.429 - 14.487: 59.9081% ( 105) 00:09:26.530 14.487 - 14.545: 61.1076% ( 94) 00:09:26.530 14.545 - 14.604: 62.2049% ( 86) 00:09:26.530 14.604 - 14.662: 62.9195% ( 56) 00:09:26.530 14.662 - 14.720: 63.4809% ( 44) 00:09:26.530 14.720 - 14.778: 64.1700% ( 54) 00:09:26.530 14.778 - 14.836: 64.7314% ( 44) 00:09:26.530 14.836 - 14.895: 65.2163% ( 38) 00:09:26.530 14.895 - 15.011: 65.9308% ( 56) 00:09:26.530 15.011 - 15.127: 66.4795% ( 43) 00:09:26.530 15.127 - 15.244: 66.8496% ( 29) 00:09:26.530 15.244 - 15.360: 67.0920% ( 19) 00:09:26.530 15.360 - 15.476: 67.3344% ( 19) 00:09:26.530 15.476 - 15.593: 67.5003% ( 13) 00:09:26.530 15.593 - 15.709: 67.6152% ( 9) 00:09:26.530 15.709 - 15.825: 67.6790% ( 5) 00:09:26.530 15.825 - 15.942: 67.7555% ( 6) 00:09:26.530 15.942 - 16.058: 67.8066% ( 4) 00:09:26.530 16.058 - 16.175: 67.8448% ( 3) 00:09:26.530 16.175 - 16.291: 67.8959% ( 4) 00:09:26.530 16.291 - 16.407: 67.9214% ( 2) 00:09:26.530 16.407 - 16.524: 67.9469% ( 2) 00:09:26.530 16.524 - 16.640: 68.0490% ( 8) 00:09:26.530 16.640 - 16.756: 68.0745% ( 2) 00:09:26.530 16.873 - 16.989: 68.1256% ( 4) 00:09:26.530 16.989 - 17.105: 68.1766% ( 4) 00:09:26.530 17.105 - 17.222: 68.2276% ( 4) 00:09:26.530 17.222 - 17.338: 68.7891% ( 44) 00:09:26.530 17.338 - 17.455: 70.9200% ( 167) 00:09:26.530 17.455 - 17.571: 75.2456% ( 339) 00:09:26.530 17.571 - 17.687: 79.3416% ( 321) 00:09:26.530 17.687 - 17.804: 81.9063% ( 201) 00:09:26.530 17.804 - 17.920: 83.0547% ( 90) 00:09:26.530 17.920 - 18.036: 83.9862% ( 73) 00:09:26.530 18.036 - 18.153: 85.0453% ( 83) 00:09:26.530 
18.153 - 18.269: 86.0151% ( 76) 00:09:26.530 18.269 - 18.385: 86.5255% ( 40) 00:09:26.530 18.385 - 18.502: 86.9848% ( 36) 00:09:26.530 18.502 - 18.618: 87.2911% ( 24) 00:09:26.530 18.618 - 18.735: 87.6994% ( 32) 00:09:26.530 18.735 - 18.851: 88.0056% ( 24) 00:09:26.530 18.851 - 18.967: 88.1970% ( 15) 00:09:26.530 18.967 - 19.084: 88.3757% ( 14) 00:09:26.530 19.084 - 19.200: 88.5798% ( 16) 00:09:26.530 19.200 - 19.316: 88.7202% ( 11) 00:09:26.530 19.316 - 19.433: 88.9243% ( 16) 00:09:26.530 19.433 - 19.549: 89.0519% ( 10) 00:09:26.530 19.549 - 19.665: 89.1285% ( 6) 00:09:26.530 19.665 - 19.782: 89.3071% ( 14) 00:09:26.530 19.782 - 19.898: 89.4092% ( 8) 00:09:26.530 19.898 - 20.015: 89.5241% ( 9) 00:09:26.530 20.015 - 20.131: 89.6644% ( 11) 00:09:26.530 20.131 - 20.247: 89.8048% ( 11) 00:09:26.530 20.247 - 20.364: 89.9324% ( 10) 00:09:26.530 20.364 - 20.480: 90.0600% ( 10) 00:09:26.530 20.480 - 20.596: 90.1876% ( 10) 00:09:26.530 20.596 - 20.713: 90.3407% ( 12) 00:09:26.530 20.713 - 20.829: 90.4300% ( 7) 00:09:26.530 20.829 - 20.945: 90.5449% ( 9) 00:09:26.530 20.945 - 21.062: 90.6597% ( 9) 00:09:26.530 21.062 - 21.178: 90.6980% ( 3) 00:09:26.530 21.178 - 21.295: 90.7873% ( 7) 00:09:26.530 21.295 - 21.411: 90.8639% ( 6) 00:09:26.530 21.411 - 21.527: 90.9659% ( 8) 00:09:26.530 21.527 - 21.644: 91.0808% ( 9) 00:09:26.530 21.644 - 21.760: 91.1318% ( 4) 00:09:26.530 21.760 - 21.876: 91.2594% ( 10) 00:09:26.530 21.876 - 21.993: 91.3232% ( 5) 00:09:26.530 21.993 - 22.109: 91.3487% ( 2) 00:09:26.530 22.109 - 22.225: 91.4381% ( 7) 00:09:26.530 22.225 - 22.342: 91.5146% ( 6) 00:09:26.530 22.342 - 22.458: 91.5529% ( 3) 00:09:26.530 22.458 - 22.575: 91.5912% ( 3) 00:09:26.530 22.575 - 22.691: 91.6295% ( 3) 00:09:26.530 22.691 - 22.807: 91.6805% ( 4) 00:09:26.530 22.807 - 22.924: 91.7698% ( 7) 00:09:26.530 22.924 - 23.040: 91.8464% ( 6) 00:09:26.530 23.040 - 23.156: 91.9612% ( 9) 00:09:26.530 23.156 - 23.273: 92.0122% ( 4) 00:09:26.530 23.273 - 23.389: 92.0633% ( 4) 00:09:26.530 23.389 - 23.505: 92.1143% ( 4) 00:09:26.530 23.505 - 23.622: 92.1909% ( 6) 00:09:26.530 23.622 - 23.738: 92.2547% ( 5) 00:09:26.530 23.738 - 23.855: 92.3312% ( 6) 00:09:26.530 23.855 - 23.971: 92.4078% ( 6) 00:09:26.530 23.971 - 24.087: 92.5099% ( 8) 00:09:26.530 24.087 - 24.204: 92.6120% ( 8) 00:09:26.530 24.204 - 24.320: 92.6630% ( 4) 00:09:26.530 24.320 - 24.436: 92.7013% ( 3) 00:09:26.530 24.436 - 24.553: 92.7651% ( 5) 00:09:26.530 24.553 - 24.669: 92.8289% ( 5) 00:09:26.530 24.669 - 24.785: 92.8672% ( 3) 00:09:26.530 24.785 - 24.902: 92.9182% ( 4) 00:09:26.530 24.902 - 25.018: 92.9820% ( 5) 00:09:26.530 25.018 - 25.135: 93.0075% ( 2) 00:09:26.530 25.135 - 25.251: 93.0330% ( 2) 00:09:26.530 25.251 - 25.367: 93.0968% ( 5) 00:09:26.530 25.367 - 25.484: 93.1734% ( 6) 00:09:26.530 25.484 - 25.600: 93.2244% ( 4) 00:09:26.530 25.600 - 25.716: 93.3010% ( 6) 00:09:26.531 25.716 - 25.833: 93.3776% ( 6) 00:09:26.531 25.833 - 25.949: 93.3903% ( 1) 00:09:26.531 25.949 - 26.065: 93.4158% ( 2) 00:09:26.531 26.065 - 26.182: 93.4541% ( 3) 00:09:26.531 26.182 - 26.298: 93.4924% ( 3) 00:09:26.531 26.298 - 26.415: 93.5179% ( 2) 00:09:26.531 26.415 - 26.531: 93.5307% ( 1) 00:09:26.531 26.531 - 26.647: 93.5434% ( 1) 00:09:26.531 26.647 - 26.764: 93.6200% ( 6) 00:09:26.531 26.764 - 26.880: 93.6455% ( 2) 00:09:26.531 26.880 - 26.996: 93.6838% ( 3) 00:09:26.531 26.996 - 27.113: 93.7093% ( 2) 00:09:26.531 27.113 - 27.229: 93.7476% ( 3) 00:09:26.531 27.345 - 27.462: 93.8114% ( 5) 00:09:26.531 27.462 - 27.578: 93.9135% ( 8) 00:09:26.531 27.578 - 
27.695: 93.9900% ( 6) 00:09:26.531 27.695 - 27.811: 94.0283% ( 3) 00:09:26.531 27.811 - 27.927: 94.2708% ( 19) 00:09:26.531 27.927 - 28.044: 94.4111% ( 11) 00:09:26.531 28.044 - 28.160: 94.5770% ( 13) 00:09:26.531 28.160 - 28.276: 94.7939% ( 17) 00:09:26.531 28.276 - 28.393: 94.9726% ( 14) 00:09:26.531 28.393 - 28.509: 95.1767% ( 16) 00:09:26.531 28.509 - 28.625: 95.4574% ( 22) 00:09:26.531 28.625 - 28.742: 95.8020% ( 27) 00:09:26.531 28.742 - 28.858: 96.3379% ( 42) 00:09:26.531 28.858 - 28.975: 96.7207% ( 30) 00:09:26.531 28.975 - 29.091: 97.0397% ( 25) 00:09:26.531 29.091 - 29.207: 97.4225% ( 30) 00:09:26.531 29.207 - 29.324: 97.6139% ( 15) 00:09:26.531 29.324 - 29.440: 97.8946% ( 22) 00:09:26.531 29.440 - 29.556: 98.1881% ( 23) 00:09:26.531 29.556 - 29.673: 98.3795% ( 15) 00:09:26.531 29.673 - 29.789: 98.4560% ( 6) 00:09:26.531 29.789 - 30.022: 98.5964% ( 11) 00:09:26.531 30.022 - 30.255: 98.6985% ( 8) 00:09:26.531 30.255 - 30.487: 98.7368% ( 3) 00:09:26.531 30.487 - 30.720: 98.8006% ( 5) 00:09:26.531 30.720 - 30.953: 98.8644% ( 5) 00:09:26.531 30.953 - 31.185: 98.8899% ( 2) 00:09:26.531 31.185 - 31.418: 98.9154% ( 2) 00:09:26.531 31.418 - 31.651: 98.9282% ( 1) 00:09:26.531 32.349 - 32.582: 98.9537% ( 2) 00:09:26.531 33.047 - 33.280: 98.9920% ( 3) 00:09:26.531 33.280 - 33.513: 99.0047% ( 1) 00:09:26.531 33.513 - 33.745: 99.0813% ( 6) 00:09:26.531 33.745 - 33.978: 99.0940% ( 1) 00:09:26.531 33.978 - 34.211: 99.1196% ( 2) 00:09:26.531 34.211 - 34.444: 99.1451% ( 2) 00:09:26.531 34.444 - 34.676: 99.1834% ( 3) 00:09:26.531 34.676 - 34.909: 99.2599% ( 6) 00:09:26.531 34.909 - 35.142: 99.3492% ( 7) 00:09:26.531 35.142 - 35.375: 99.4130% ( 5) 00:09:26.531 35.375 - 35.607: 99.4386% ( 2) 00:09:26.531 35.607 - 35.840: 99.4896% ( 4) 00:09:26.531 35.840 - 36.073: 99.5151% ( 2) 00:09:26.531 36.538 - 36.771: 99.5406% ( 2) 00:09:26.531 36.771 - 37.004: 99.5662% ( 2) 00:09:26.531 37.004 - 37.236: 99.5789% ( 1) 00:09:26.531 37.236 - 37.469: 99.5917% ( 1) 00:09:26.531 37.469 - 37.702: 99.6172% ( 2) 00:09:26.531 37.935 - 38.167: 99.6427% ( 2) 00:09:26.531 38.400 - 38.633: 99.6810% ( 3) 00:09:26.531 38.633 - 38.865: 99.7065% ( 2) 00:09:26.531 40.029 - 40.262: 99.7193% ( 1) 00:09:26.531 40.262 - 40.495: 99.7320% ( 1) 00:09:26.531 40.495 - 40.727: 99.7576% ( 2) 00:09:26.531 40.960 - 41.193: 99.7703% ( 1) 00:09:26.531 41.425 - 41.658: 99.7958% ( 2) 00:09:26.531 43.287 - 43.520: 99.8086% ( 1) 00:09:26.531 43.520 - 43.753: 99.8341% ( 2) 00:09:26.531 43.985 - 44.218: 99.8596% ( 2) 00:09:26.531 44.218 - 44.451: 99.8724% ( 1) 00:09:26.531 47.244 - 47.476: 99.8852% ( 1) 00:09:26.531 47.709 - 47.942: 99.9107% ( 2) 00:09:26.531 47.942 - 48.175: 99.9234% ( 1) 00:09:26.531 49.571 - 49.804: 99.9362% ( 1) 00:09:26.531 52.131 - 52.364: 99.9490% ( 1) 00:09:26.531 53.295 - 53.527: 99.9617% ( 1) 00:09:26.531 57.716 - 57.949: 99.9745% ( 1) 00:09:26.531 80.524 - 80.989: 99.9872% ( 1) 00:09:26.531 98.211 - 98.676: 100.0000% ( 1) 00:09:26.531 00:09:26.531 Complete histogram 00:09:26.531 ================== 00:09:26.531 Range in us Cumulative Count 00:09:26.531 8.553 - 8.611: 0.1021% ( 8) 00:09:26.531 8.611 - 8.669: 0.4721% ( 29) 00:09:26.531 8.669 - 8.727: 1.0463% ( 45) 00:09:26.531 8.727 - 8.785: 1.9778% ( 73) 00:09:26.531 8.785 - 8.844: 3.3686% ( 109) 00:09:26.531 8.844 - 8.902: 5.2061% ( 144) 00:09:26.531 8.902 - 8.960: 7.1966% ( 156) 00:09:26.531 8.960 - 9.018: 9.6593% ( 193) 00:09:26.531 9.018 - 9.076: 12.3006% ( 207) 00:09:26.531 9.076 - 9.135: 14.8909% ( 203) 00:09:26.531 9.135 - 9.193: 18.1702% ( 257) 00:09:26.531 9.193 - 
9.251: 21.9855% ( 299) 00:09:26.531 9.251 - 9.309: 26.5152% ( 355) 00:09:26.531 9.309 - 9.367: 31.3896% ( 382) 00:09:26.531 9.367 - 9.425: 36.2001% ( 377) 00:09:26.531 9.425 - 9.484: 40.6661% ( 350) 00:09:26.531 9.484 - 9.542: 44.3920% ( 292) 00:09:26.531 9.542 - 9.600: 47.5309% ( 246) 00:09:26.531 9.600 - 9.658: 50.6061% ( 241) 00:09:26.531 9.658 - 9.716: 53.5792% ( 233) 00:09:26.531 9.716 - 9.775: 55.8505% ( 178) 00:09:26.531 9.775 - 9.833: 57.7900% ( 152) 00:09:26.531 9.833 - 9.891: 58.9766% ( 93) 00:09:26.531 9.891 - 9.949: 59.8188% ( 66) 00:09:26.531 9.949 - 10.007: 60.7375% ( 72) 00:09:26.531 10.007 - 10.065: 61.4776% ( 58) 00:09:26.531 10.065 - 10.124: 62.1794% ( 55) 00:09:26.531 10.124 - 10.182: 62.7664% ( 46) 00:09:26.531 10.182 - 10.240: 63.3278% ( 44) 00:09:26.531 10.240 - 10.298: 63.8510% ( 41) 00:09:26.531 10.298 - 10.356: 64.2593% ( 32) 00:09:26.531 10.356 - 10.415: 64.7952% ( 42) 00:09:26.531 10.415 - 10.473: 65.0759% ( 22) 00:09:26.531 10.473 - 10.531: 65.3949% ( 25) 00:09:26.531 10.531 - 10.589: 65.6756% ( 22) 00:09:26.531 10.589 - 10.647: 65.8415% ( 13) 00:09:26.531 10.647 - 10.705: 66.0712% ( 18) 00:09:26.531 10.705 - 10.764: 66.1860% ( 9) 00:09:26.531 10.764 - 10.822: 66.2626% ( 6) 00:09:26.531 10.822 - 10.880: 66.3264% ( 5) 00:09:26.531 10.880 - 10.938: 66.4540% ( 10) 00:09:26.531 10.938 - 10.996: 66.5433% ( 7) 00:09:26.531 10.996 - 11.055: 66.5816% ( 3) 00:09:26.531 11.055 - 11.113: 66.6582% ( 6) 00:09:26.531 11.113 - 11.171: 66.6709% ( 1) 00:09:26.531 11.171 - 11.229: 66.7475% ( 6) 00:09:26.531 11.229 - 11.287: 66.8496% ( 8) 00:09:26.531 11.287 - 11.345: 66.9006% ( 4) 00:09:26.531 11.345 - 11.404: 66.9261% ( 2) 00:09:26.531 11.404 - 11.462: 67.0027% ( 6) 00:09:26.531 11.462 - 11.520: 67.1048% ( 8) 00:09:26.531 11.520 - 11.578: 67.2324% ( 10) 00:09:26.531 11.578 - 11.636: 67.7683% ( 42) 00:09:26.531 11.636 - 11.695: 69.8354% ( 162) 00:09:26.531 11.695 - 11.753: 73.5103% ( 288) 00:09:26.531 11.753 - 11.811: 77.4148% ( 306) 00:09:26.531 11.811 - 11.869: 80.8983% ( 273) 00:09:26.531 11.869 - 11.927: 82.4423% ( 121) 00:09:26.531 11.927 - 11.985: 83.2717% ( 65) 00:09:26.531 11.985 - 12.044: 83.6289% ( 28) 00:09:26.531 12.044 - 12.102: 83.8076% ( 14) 00:09:26.531 12.102 - 12.160: 83.9097% ( 8) 00:09:26.531 12.160 - 12.218: 84.0628% ( 12) 00:09:26.531 12.218 - 12.276: 84.3180% ( 20) 00:09:26.531 12.276 - 12.335: 84.5859% ( 21) 00:09:26.531 12.335 - 12.393: 84.8794% ( 23) 00:09:26.531 12.393 - 12.451: 85.0836% ( 16) 00:09:26.531 12.451 - 12.509: 85.1984% ( 9) 00:09:26.531 12.509 - 12.567: 85.3133% ( 9) 00:09:26.531 12.567 - 12.625: 85.4281% ( 9) 00:09:26.531 12.625 - 12.684: 85.6195% ( 15) 00:09:26.531 12.684 - 12.742: 85.9130% ( 23) 00:09:26.531 12.742 - 12.800: 86.3468% ( 34) 00:09:26.531 12.800 - 12.858: 86.7551% ( 32) 00:09:26.531 12.858 - 12.916: 86.9976% ( 19) 00:09:26.531 12.916 - 12.975: 87.1124% ( 9) 00:09:26.531 12.975 - 13.033: 87.2273% ( 9) 00:09:26.531 13.033 - 13.091: 87.3038% ( 6) 00:09:26.531 13.091 - 13.149: 87.3804% ( 6) 00:09:26.531 13.149 - 13.207: 87.4314% ( 4) 00:09:26.531 13.207 - 13.265: 87.4825% ( 4) 00:09:26.531 13.265 - 13.324: 87.4952% ( 1) 00:09:26.531 13.324 - 13.382: 87.5207% ( 2) 00:09:26.531 13.382 - 13.440: 87.5590% ( 3) 00:09:26.531 13.440 - 13.498: 87.5718% ( 1) 00:09:26.531 13.498 - 13.556: 87.5973% ( 2) 00:09:26.531 13.556 - 13.615: 87.6228% ( 2) 00:09:26.531 13.615 - 13.673: 87.6483% ( 2) 00:09:26.531 13.673 - 13.731: 87.6866% ( 3) 00:09:26.531 13.731 - 13.789: 87.6994% ( 1) 00:09:26.531 13.905 - 13.964: 87.7377% ( 3) 00:09:26.531 13.964 
- 14.022: 87.7632% ( 2) 00:09:26.531 14.022 - 14.080: 87.7759% ( 1) 00:09:26.531 14.080 - 14.138: 87.8270% ( 4) 00:09:26.531 14.138 - 14.196: 87.8525% ( 2) 00:09:26.531 14.196 - 14.255: 87.9035% ( 4) 00:09:26.531 14.255 - 14.313: 87.9546% ( 4) 00:09:26.531 14.313 - 14.371: 88.0056% ( 4) 00:09:26.531 14.371 - 14.429: 88.0822% ( 6) 00:09:26.531 14.429 - 14.487: 88.1460% ( 5) 00:09:26.531 14.487 - 14.545: 88.1715% ( 2) 00:09:26.531 14.545 - 14.604: 88.2098% ( 3) 00:09:26.531 14.604 - 14.662: 88.2225% ( 1) 00:09:26.531 14.662 - 14.720: 88.2353% ( 1) 00:09:26.531 14.720 - 14.778: 88.2481% ( 1) 00:09:26.531 14.778 - 14.836: 88.2863% ( 3) 00:09:26.531 14.836 - 14.895: 88.3374% ( 4) 00:09:26.531 14.895 - 15.011: 88.3884% ( 4) 00:09:26.531 15.011 - 15.127: 88.4905% ( 8) 00:09:26.531 15.127 - 15.244: 88.5926% ( 8) 00:09:26.531 15.244 - 15.360: 88.7329% ( 11) 00:09:26.531 15.360 - 15.476: 88.7967% ( 5) 00:09:26.531 15.476 - 15.593: 88.8988% ( 8) 00:09:26.531 15.593 - 15.709: 89.0647% ( 13) 00:09:26.531 15.709 - 15.825: 89.1923% ( 10) 00:09:26.531 15.825 - 15.942: 89.3199% ( 10) 00:09:26.531 15.942 - 16.058: 89.4475% ( 10) 00:09:26.531 16.058 - 16.175: 89.6006% ( 12) 00:09:26.531 16.175 - 16.291: 89.6899% ( 7) 00:09:26.531 16.291 - 16.407: 89.7665% ( 6) 00:09:26.531 16.407 - 16.524: 89.8175% ( 4) 00:09:26.531 16.524 - 16.640: 89.8941% ( 6) 00:09:26.531 16.640 - 16.756: 89.9834% ( 7) 00:09:26.531 16.756 - 16.873: 90.0217% ( 3) 00:09:26.531 16.873 - 16.989: 90.0600% ( 3) 00:09:26.531 16.989 - 17.105: 90.1365% ( 6) 00:09:26.531 17.105 - 17.222: 90.1876% ( 4) 00:09:26.531 17.222 - 17.338: 90.2514% ( 5) 00:09:26.531 17.338 - 17.455: 90.3152% ( 5) 00:09:26.531 17.455 - 17.571: 90.3279% ( 1) 00:09:26.531 17.571 - 17.687: 90.3535% ( 2) 00:09:26.531 17.687 - 17.804: 90.4173% ( 5) 00:09:26.531 17.804 - 17.920: 90.4555% ( 3) 00:09:26.531 17.920 - 18.036: 90.4683% ( 1) 00:09:26.531 18.036 - 18.153: 90.5066% ( 3) 00:09:26.531 18.153 - 18.269: 90.5449% ( 3) 00:09:26.532 18.269 - 18.385: 90.6214% ( 6) 00:09:26.532 18.385 - 18.502: 90.6342% ( 1) 00:09:26.532 18.735 - 18.851: 90.6597% ( 2) 00:09:26.532 18.967 - 19.084: 90.6725% ( 1) 00:09:26.532 19.084 - 19.200: 90.6852% ( 1) 00:09:26.532 19.200 - 19.316: 90.7235% ( 3) 00:09:26.532 19.316 - 19.433: 90.7490% ( 2) 00:09:26.532 19.549 - 19.665: 90.7873% ( 3) 00:09:26.532 19.665 - 19.782: 90.8639% ( 6) 00:09:26.532 19.782 - 19.898: 90.8766% ( 1) 00:09:26.532 19.898 - 20.015: 90.8894% ( 1) 00:09:26.532 20.015 - 20.131: 90.9149% ( 2) 00:09:26.532 20.131 - 20.247: 90.9787% ( 5) 00:09:26.532 20.364 - 20.480: 91.0425% ( 5) 00:09:26.532 20.596 - 20.713: 91.0935% ( 4) 00:09:26.532 20.713 - 20.829: 91.1956% ( 8) 00:09:26.532 20.829 - 20.945: 91.2722% ( 6) 00:09:26.532 20.945 - 21.062: 91.2977% ( 2) 00:09:26.532 21.062 - 21.178: 91.3232% ( 2) 00:09:26.532 21.178 - 21.295: 91.3743% ( 4) 00:09:26.532 21.295 - 21.411: 91.3998% ( 2) 00:09:26.532 21.411 - 21.527: 91.4763% ( 6) 00:09:26.532 21.527 - 21.644: 91.5146% ( 3) 00:09:26.532 21.644 - 21.760: 91.5401% ( 2) 00:09:26.532 21.993 - 22.109: 91.5529% ( 1) 00:09:26.532 22.109 - 22.225: 91.5657% ( 1) 00:09:26.532 22.225 - 22.342: 91.5912% ( 2) 00:09:26.532 22.342 - 22.458: 91.6167% ( 2) 00:09:26.532 22.458 - 22.575: 91.6805% ( 5) 00:09:26.532 22.575 - 22.691: 91.7315% ( 4) 00:09:26.532 22.691 - 22.807: 91.7570% ( 2) 00:09:26.532 22.807 - 22.924: 91.7698% ( 1) 00:09:26.532 22.924 - 23.040: 91.7953% ( 2) 00:09:26.532 23.040 - 23.156: 91.8336% ( 3) 00:09:26.532 23.156 - 23.273: 91.8464% ( 1) 00:09:26.532 23.273 - 23.389: 91.9102% ( 5) 
00:09:26.532 23.389 - 23.505: 92.0250% ( 9) 00:09:26.532 23.505 - 23.622: 92.1526% ( 10) 00:09:26.532 23.622 - 23.738: 92.4333% ( 22) 00:09:26.532 23.738 - 23.855: 92.8416% ( 32) 00:09:26.532 23.855 - 23.971: 93.3776% ( 42) 00:09:26.532 23.971 - 24.087: 93.8880% ( 40) 00:09:26.532 24.087 - 24.204: 94.5642% ( 53) 00:09:26.532 24.204 - 24.320: 95.5212% ( 75) 00:09:26.532 24.320 - 24.436: 96.2358% ( 56) 00:09:26.532 24.436 - 24.553: 96.9248% ( 54) 00:09:26.532 24.553 - 24.669: 97.3332% ( 32) 00:09:26.532 24.669 - 24.785: 97.7798% ( 35) 00:09:26.532 24.785 - 24.902: 98.0094% ( 18) 00:09:26.532 24.902 - 25.018: 98.2519% ( 19) 00:09:26.532 25.018 - 25.135: 98.4688% ( 17) 00:09:26.532 25.135 - 25.251: 98.6219% ( 12) 00:09:26.532 25.251 - 25.367: 98.6985% ( 6) 00:09:26.532 25.367 - 25.484: 98.7750% ( 6) 00:09:26.532 25.484 - 25.600: 98.8516% ( 6) 00:09:26.532 25.600 - 25.716: 98.8644% ( 1) 00:09:26.532 25.833 - 25.949: 98.9026% ( 3) 00:09:26.532 25.949 - 26.065: 98.9282% ( 2) 00:09:26.532 26.065 - 26.182: 98.9409% ( 1) 00:09:26.532 26.415 - 26.531: 98.9537% ( 1) 00:09:26.532 26.647 - 26.764: 98.9664% ( 1) 00:09:26.532 26.764 - 26.880: 98.9792% ( 1) 00:09:26.532 26.880 - 26.996: 98.9920% ( 1) 00:09:26.532 26.996 - 27.113: 99.0047% ( 1) 00:09:26.532 27.113 - 27.229: 99.0430% ( 3) 00:09:26.532 27.462 - 27.578: 99.0558% ( 1) 00:09:26.532 27.578 - 27.695: 99.0685% ( 1) 00:09:26.532 27.811 - 27.927: 99.0813% ( 1) 00:09:26.532 28.044 - 28.160: 99.0940% ( 1) 00:09:26.532 28.393 - 28.509: 99.1068% ( 1) 00:09:26.532 28.742 - 28.858: 99.1196% ( 1) 00:09:26.532 28.975 - 29.091: 99.1323% ( 1) 00:09:26.532 29.091 - 29.207: 99.1451% ( 1) 00:09:26.532 29.324 - 29.440: 99.1578% ( 1) 00:09:26.532 29.556 - 29.673: 99.1706% ( 1) 00:09:26.532 29.673 - 29.789: 99.2089% ( 3) 00:09:26.532 29.789 - 30.022: 99.2344% ( 2) 00:09:26.532 30.022 - 30.255: 99.2982% ( 5) 00:09:26.532 30.255 - 30.487: 99.4003% ( 8) 00:09:26.532 30.487 - 30.720: 99.4896% ( 7) 00:09:26.532 30.720 - 30.953: 99.5151% ( 2) 00:09:26.532 30.953 - 31.185: 99.5789% ( 5) 00:09:26.532 31.418 - 31.651: 99.5917% ( 1) 00:09:26.532 31.651 - 31.884: 99.6044% ( 1) 00:09:26.532 32.116 - 32.349: 99.6427% ( 3) 00:09:26.532 32.349 - 32.582: 99.6555% ( 1) 00:09:26.532 32.582 - 32.815: 99.6682% ( 1) 00:09:26.532 32.815 - 33.047: 99.7193% ( 4) 00:09:26.532 33.047 - 33.280: 99.7320% ( 1) 00:09:26.532 33.280 - 33.513: 99.7576% ( 2) 00:09:26.532 33.978 - 34.211: 99.7703% ( 1) 00:09:26.532 34.211 - 34.444: 99.8086% ( 3) 00:09:26.532 34.676 - 34.909: 99.8214% ( 1) 00:09:26.532 35.375 - 35.607: 99.8341% ( 1) 00:09:26.532 35.607 - 35.840: 99.8469% ( 1) 00:09:26.532 36.073 - 36.305: 99.8596% ( 1) 00:09:26.532 38.633 - 38.865: 99.8724% ( 1) 00:09:26.532 40.495 - 40.727: 99.8852% ( 1) 00:09:26.532 40.727 - 40.960: 99.8979% ( 1) 00:09:26.532 41.425 - 41.658: 99.9107% ( 1) 00:09:26.532 41.891 - 42.124: 99.9234% ( 1) 00:09:26.532 42.124 - 42.356: 99.9362% ( 1) 00:09:26.532 42.589 - 42.822: 99.9490% ( 1) 00:09:26.532 44.451 - 44.684: 99.9617% ( 1) 00:09:26.532 57.018 - 57.251: 99.9745% ( 1) 00:09:26.532 58.880 - 59.113: 99.9872% ( 1) 00:09:26.532 510.138 - 513.862: 100.0000% ( 1) 00:09:26.532 00:09:26.532 ************************************ 00:09:26.532 END TEST nvme_overhead 00:09:26.532 ************************************ 00:09:26.532 00:09:26.532 real 0m1.349s 00:09:26.532 user 0m1.139s 00:09:26.532 sys 0m0.157s 00:09:26.532 11:21:09 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.532 11:21:09 nvme.nvme_overhead -- common/autotest_common.sh@10 -- 
# set +x 00:09:26.532 11:21:09 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:26.532 11:21:09 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:09:26.532 11:21:09 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.532 11:21:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:26.532 ************************************ 00:09:26.532 START TEST nvme_arbitration 00:09:26.532 ************************************ 00:09:26.532 11:21:09 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:29.814 Initializing NVMe Controllers 00:09:29.814 Attached to 0000:00:10.0 00:09:29.814 Attached to 0000:00:11.0 00:09:29.814 Attached to 0000:00:13.0 00:09:29.814 Attached to 0000:00:12.0 00:09:29.814 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:29.814 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:29.814 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:29.814 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:29.814 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:29.814 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:29.814 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:29.814 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:29.814 Initialization complete. Launching workers. 00:09:29.814 Starting thread on core 1 with urgent priority queue 00:09:29.814 Starting thread on core 2 with urgent priority queue 00:09:29.814 Starting thread on core 3 with urgent priority queue 00:09:29.814 Starting thread on core 0 with urgent priority queue 00:09:29.814 QEMU NVMe Ctrl (12340 ) core 0: 725.33 IO/s 137.87 secs/100000 ios 00:09:29.814 QEMU NVMe Ctrl (12342 ) core 0: 725.33 IO/s 137.87 secs/100000 ios 00:09:29.814 QEMU NVMe Ctrl (12341 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:09:29.814 QEMU NVMe Ctrl (12342 ) core 1: 682.67 IO/s 146.48 secs/100000 ios 00:09:29.814 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:09:29.814 QEMU NVMe Ctrl (12342 ) core 3: 576.00 IO/s 173.61 secs/100000 ios 00:09:29.814 ======================================================== 00:09:29.814 00:09:29.814 ************************************ 00:09:29.814 END TEST nvme_arbitration 00:09:29.814 ************************************ 00:09:29.814 00:09:29.814 real 0m3.425s 00:09:29.814 user 0m9.314s 00:09:29.814 sys 0m0.168s 00:09:29.814 11:21:12 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:29.814 11:21:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:29.814 11:21:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:29.814 11:21:12 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:29.814 11:21:12 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.814 11:21:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.814 ************************************ 00:09:29.814 START TEST nvme_single_aen 00:09:29.814 ************************************ 00:09:29.814 11:21:12 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:30.072 Asynchronous Event Request test 00:09:30.072 Attached to 0000:00:10.0 00:09:30.072 Attached to 0000:00:11.0 
00:09:30.072 Attached to 0000:00:13.0 00:09:30.073 Attached to 0000:00:12.0 00:09:30.073 Reset controller to setup AER completions for this process 00:09:30.073 Registering asynchronous event callbacks... 00:09:30.073 Getting orig temperature thresholds of all controllers 00:09:30.073 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:30.073 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:30.073 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:30.073 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:30.073 Setting all controllers temperature threshold low to trigger AER 00:09:30.073 Waiting for all controllers temperature threshold to be set lower 00:09:30.073 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:30.073 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:30.073 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:30.073 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:30.073 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:30.073 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:30.073 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:30.073 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:30.073 Waiting for all controllers to trigger AER and reset threshold 00:09:30.073 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.073 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.073 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.073 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:30.073 Cleaning up... 
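Editor's note: the AER test above lowers each controller's temperature threshold below the current 323 Kelvin reading so the drive raises an asynchronous event, then restores the threshold in its aer_cb. A minimal sketch of that flow with the public SPDK API, assuming an already-attached ctrlr; the restore step is omitted here for brevity.

```c
#include <stdbool.h>
#include "spdk/nvme.h"

static volatile bool g_aer_seen;

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		g_aer_seen = true;	/* e.g. temperature-over-threshold AEN */
	}
}

static void noop_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static void wait_for_temp_aen(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* cdw11 carries the threshold in Kelvin; 0 is guaranteed to sit
	 * below the drive's current temperature and trip the event. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
	    SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD, 0, 0, NULL, 0,
	    noop_cb, NULL);
	while (!g_aer_seen) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```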
00:09:30.073 00:09:30.073 real 0m0.297s 00:09:30.073 user 0m0.103s 00:09:30.073 sys 0m0.147s 00:09:30.073 ************************************ 00:09:30.073 END TEST nvme_single_aen 00:09:30.073 ************************************ 00:09:30.073 11:21:12 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:30.073 11:21:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:30.073 11:21:13 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:30.073 11:21:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:30.073 11:21:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:30.073 11:21:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:30.331 ************************************ 00:09:30.331 START TEST nvme_doorbell_aers 00:09:30.331 ************************************ 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:30.331 11:21:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:30.589 [2024-11-15 11:21:13.419552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:09:40.609 Executing: test_write_invalid_db 00:09:40.609 Waiting for AER completion... 00:09:40.609 Failure: test_write_invalid_db 00:09:40.609 00:09:40.609 Executing: test_invalid_db_write_overflow_sq 00:09:40.609 Waiting for AER completion... 00:09:40.609 Failure: test_invalid_db_write_overflow_sq 00:09:40.609 00:09:40.609 Executing: test_invalid_db_write_overflow_cq 00:09:40.609 Waiting for AER completion... 
00:09:40.609 Failure: test_invalid_db_write_overflow_cq 00:09:40.609 00:09:40.609 11:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:40.609 11:21:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:40.609 [2024-11-15 11:21:23.466264] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:09:50.571 Executing: test_write_invalid_db 00:09:50.571 Waiting for AER completion... 00:09:50.571 Failure: test_write_invalid_db 00:09:50.571 00:09:50.571 Executing: test_invalid_db_write_overflow_sq 00:09:50.571 Waiting for AER completion... 00:09:50.571 Failure: test_invalid_db_write_overflow_sq 00:09:50.571 00:09:50.571 Executing: test_invalid_db_write_overflow_cq 00:09:50.571 Waiting for AER completion... 00:09:50.571 Failure: test_invalid_db_write_overflow_cq 00:09:50.571 00:09:50.571 11:21:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:50.571 11:21:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:50.571 [2024-11-15 11:21:33.511820] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:00.573 Executing: test_write_invalid_db 00:10:00.573 Waiting for AER completion... 00:10:00.573 Failure: test_write_invalid_db 00:10:00.573 00:10:00.573 Executing: test_invalid_db_write_overflow_sq 00:10:00.573 Waiting for AER completion... 00:10:00.573 Failure: test_invalid_db_write_overflow_sq 00:10:00.573 00:10:00.573 Executing: test_invalid_db_write_overflow_cq 00:10:00.573 Waiting for AER completion... 00:10:00.573 Failure: test_invalid_db_write_overflow_cq 00:10:00.573 00:10:00.573 11:21:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:00.573 11:21:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:00.830 [2024-11-15 11:21:43.559920] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.869 Executing: test_write_invalid_db 00:10:10.869 Waiting for AER completion... 00:10:10.869 Failure: test_write_invalid_db 00:10:10.869 00:10:10.869 Executing: test_invalid_db_write_overflow_sq 00:10:10.869 Waiting for AER completion... 00:10:10.869 Failure: test_invalid_db_write_overflow_sq 00:10:10.869 00:10:10.869 Executing: test_invalid_db_write_overflow_cq 00:10:10.869 Waiting for AER completion... 
00:10:10.869 Failure: test_invalid_db_write_overflow_cq 00:10:10.869 00:10:10.869 ************************************ 00:10:10.869 END TEST nvme_doorbell_aers 00:10:10.869 ************************************ 00:10:10.869 00:10:10.869 real 0m40.255s 00:10:10.869 user 0m34.081s 00:10:10.869 sys 0m5.777s 00:10:10.869 11:21:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.869 11:21:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:10.869 11:21:53 nvme -- nvme/nvme.sh@97 -- # uname 00:10:10.869 11:21:53 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:10.869 11:21:53 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:10.869 11:21:53 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:10.869 11:21:53 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.869 11:21:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.869 ************************************ 00:10:10.869 START TEST nvme_multi_aen 00:10:10.869 ************************************ 00:10:10.870 11:21:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:10.870 [2024-11-15 11:21:53.621940] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.622318] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.622346] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.624200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.624264] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.624285] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.625806] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.625867] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.625885] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.627418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.627621] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 00:10:10.870 [2024-11-15 11:21:53.627645] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64671) is not found. Dropping the request. 
00:10:10.870 Child process pid: 65192 00:10:11.128 [Child] Asynchronous Event Request test 00:10:11.128 [Child] Attached to 0000:00:10.0 00:10:11.128 [Child] Attached to 0000:00:11.0 00:10:11.128 [Child] Attached to 0000:00:13.0 00:10:11.128 [Child] Attached to 0000:00:12.0 00:10:11.128 [Child] Registering asynchronous event callbacks... 00:10:11.128 [Child] Getting orig temperature thresholds of all controllers 00:10:11.128 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.128 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.128 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.128 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.128 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:11.128 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.128 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.128 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.128 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.128 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.128 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.128 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.128 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.128 [Child] Cleaning up... 00:10:11.129 Asynchronous Event Request test 00:10:11.129 Attached to 0000:00:10.0 00:10:11.129 Attached to 0000:00:11.0 00:10:11.129 Attached to 0000:00:13.0 00:10:11.129 Attached to 0000:00:12.0 00:10:11.129 Reset controller to setup AER completions for this process 00:10:11.129 Registering asynchronous event callbacks... 
00:10:11.129 Getting orig temperature thresholds of all controllers 00:10:11.129 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.129 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.129 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.129 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:11.129 Setting all controllers temperature threshold low to trigger AER 00:10:11.129 Waiting for all controllers temperature threshold to be set lower 00:10:11.129 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.129 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:11.129 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.129 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:11.129 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.129 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:11.129 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:11.129 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:11.129 Waiting for all controllers to trigger AER and reset threshold 00:10:11.129 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.129 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.129 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.129 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:11.129 Cleaning up... 00:10:11.129 00:10:11.129 real 0m0.659s 00:10:11.129 user 0m0.232s 00:10:11.129 sys 0m0.323s 00:10:11.129 11:21:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.129 11:21:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:11.129 ************************************ 00:10:11.129 END TEST nvme_multi_aen 00:10:11.129 ************************************ 00:10:11.129 11:21:54 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:11.129 11:21:54 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:11.129 11:21:54 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.129 11:21:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.129 ************************************ 00:10:11.129 START TEST nvme_startup 00:10:11.129 ************************************ 00:10:11.129 11:21:54 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:11.694 Initializing NVMe Controllers 00:10:11.694 Attached to 0000:00:10.0 00:10:11.694 Attached to 0000:00:11.0 00:10:11.694 Attached to 0000:00:13.0 00:10:11.694 Attached to 0000:00:12.0 00:10:11.694 Initialization complete. 00:10:11.694 Time used:232250.641 (us). 
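Editor's note: the "Time used" figure reported by nvme_startup above is the wall time to probe and attach every local PCIe controller. A sketch of that measurement under the same assumptions (spdk_env_init() already run, error handling trimmed):

```c
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller found */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

static void time_startup(void)
{
	uint64_t t0 = spdk_get_ticks();

	/* NULL trid: enumerate all local PCIe NVMe controllers. */
	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	printf("Time used:%" PRIu64 " (us).\n",
	       (spdk_get_ticks() - t0) * 1000000 / spdk_get_ticks_hz());
}
```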
00:10:11.694 00:10:11.694 real 0m0.324s 00:10:11.694 user 0m0.108s 00:10:11.694 sys 0m0.162s 00:10:11.694 11:21:54 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.694 11:21:54 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:11.694 ************************************ 00:10:11.694 END TEST nvme_startup 00:10:11.694 ************************************ 00:10:11.694 11:21:54 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:11.694 11:21:54 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:11.694 11:21:54 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.694 11:21:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.694 ************************************ 00:10:11.694 START TEST nvme_multi_secondary 00:10:11.694 ************************************ 00:10:11.694 11:21:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:10:11.694 11:21:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65248 00:10:11.694 11:21:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:11.694 11:21:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65249 00:10:11.694 11:21:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:11.694 11:21:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:14.974 Initializing NVMe Controllers 00:10:14.974 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:14.974 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:14.974 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:14.974 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:14.974 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:14.974 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:14.974 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:14.974 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:14.974 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:14.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:14.975 Initialization complete. Launching workers. 
00:10:14.975 ======================================================== 00:10:14.975 Latency(us) 00:10:14.975 Device Information : IOPS MiB/s Average min max 00:10:14.975 PCIE (0000:00:10.0) NSID 1 from core 1: 5074.23 19.82 3151.23 1215.43 6248.63 00:10:14.975 PCIE (0000:00:11.0) NSID 1 from core 1: 5074.23 19.82 3152.65 1260.46 5964.00 00:10:14.975 PCIE (0000:00:13.0) NSID 1 from core 1: 5074.23 19.82 3152.66 1286.45 5350.16 00:10:14.975 PCIE (0000:00:12.0) NSID 1 from core 1: 5074.23 19.82 3152.70 1291.94 5860.37 00:10:14.975 PCIE (0000:00:12.0) NSID 2 from core 1: 5074.23 19.82 3152.72 1289.23 5753.78 00:10:14.975 PCIE (0000:00:12.0) NSID 3 from core 1: 5074.23 19.82 3152.62 1286.67 5911.79 00:10:14.975 ======================================================== 00:10:14.975 Total : 30445.40 118.93 3152.43 1215.43 6248.63 00:10:14.975 00:10:15.232 Initializing NVMe Controllers 00:10:15.232 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:15.232 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:15.232 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:15.232 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:15.232 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:15.232 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:15.232 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:15.232 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:15.232 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:15.232 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:15.232 Initialization complete. Launching workers. 00:10:15.232 ======================================================== 00:10:15.232 Latency(us) 00:10:15.232 Device Information : IOPS MiB/s Average min max 00:10:15.232 PCIE (0000:00:10.0) NSID 1 from core 2: 2510.65 9.81 6369.42 1647.17 13545.21 00:10:15.232 PCIE (0000:00:11.0) NSID 1 from core 2: 2510.65 9.81 6372.12 1505.05 12954.57 00:10:15.232 PCIE (0000:00:13.0) NSID 1 from core 2: 2510.65 9.81 6371.93 1604.22 16331.35 00:10:15.232 PCIE (0000:00:12.0) NSID 1 from core 2: 2510.65 9.81 6371.40 1673.16 12781.51 00:10:15.232 PCIE (0000:00:12.0) NSID 2 from core 2: 2510.65 9.81 6371.04 1215.43 12255.80 00:10:15.232 PCIE (0000:00:12.0) NSID 3 from core 2: 2510.65 9.81 6371.40 1072.18 12698.29 00:10:15.232 ======================================================== 00:10:15.232 Total : 15063.90 58.84 6371.22 1072.18 16331.35 00:10:15.232 00:10:15.232 11:21:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65248 00:10:17.131 Initializing NVMe Controllers 00:10:17.131 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:17.131 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:17.131 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:17.131 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:17.131 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:17.131 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:17.131 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:17.131 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:17.131 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:17.131 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:17.131 Initialization complete. Launching workers. 
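Editor's note: the three concurrent spdk_nvme_perf instances in this test all pass `-i 0`. That flag is the shared-memory id: processes with the same id join one DPDK hugepage/memory group, which is what lets the secondary processes drive controllers the primary attached. A sketch of the corresponding env setup; the spdk_env_opts field names are real, the wrapper function and its example arguments are illustrative.

```c
#include "spdk/env.h"

static int init_shared_env(const char *name, const char *core_mask)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = name;		/* e.g. "nvme_perf_core1" (hypothetical) */
	opts.core_mask = core_mask;	/* "0x1", "0x2", "0x4" per the log */
	opts.shm_id = 0;		/* matches the -i 0 flag above */
	return spdk_env_init(&opts);
}
```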
00:10:17.131 ======================================================== 00:10:17.131 Latency(us) 00:10:17.131 Device Information : IOPS MiB/s Average min max 00:10:17.131 PCIE (0000:00:10.0) NSID 1 from core 0: 7826.60 30.57 2042.76 987.82 8352.18 00:10:17.131 PCIE (0000:00:11.0) NSID 1 from core 0: 7826.60 30.57 2043.85 1021.50 8159.73 00:10:17.131 PCIE (0000:00:13.0) NSID 1 from core 0: 7826.60 30.57 2043.81 1017.61 8419.82 00:10:17.131 PCIE (0000:00:12.0) NSID 1 from core 0: 7826.60 30.57 2043.77 972.18 8433.27 00:10:17.131 PCIE (0000:00:12.0) NSID 2 from core 0: 7826.60 30.57 2043.72 858.08 9184.66 00:10:17.131 PCIE (0000:00:12.0) NSID 3 from core 0: 7826.60 30.57 2043.68 790.99 9269.15 00:10:17.131 ======================================================== 00:10:17.131 Total : 46959.57 183.44 2043.60 790.99 9269.15 00:10:17.131 00:10:17.131 11:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65249 00:10:17.131 11:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65318 00:10:17.131 11:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:17.131 11:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65319 00:10:17.131 11:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:17.131 11:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:20.411 Initializing NVMe Controllers 00:10:20.411 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:20.411 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:20.411 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:20.411 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:20.411 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:20.411 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:20.411 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:20.411 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:20.411 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:20.411 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:20.411 Initialization complete. Launching workers. 
00:10:20.411 ======================================================== 00:10:20.411 Latency(us) 00:10:20.411 Device Information : IOPS MiB/s Average min max 00:10:20.411 PCIE (0000:00:10.0) NSID 1 from core 0: 5332.01 20.83 2998.99 1039.83 8492.32 00:10:20.411 PCIE (0000:00:11.0) NSID 1 from core 0: 5332.01 20.83 3000.53 1050.22 8890.18 00:10:20.411 PCIE (0000:00:13.0) NSID 1 from core 0: 5332.01 20.83 3000.64 1035.51 9095.02 00:10:20.411 PCIE (0000:00:12.0) NSID 1 from core 0: 5332.01 20.83 3000.95 1034.55 9418.68 00:10:20.411 PCIE (0000:00:12.0) NSID 2 from core 0: 5332.01 20.83 3001.14 1041.98 9457.89 00:10:20.411 PCIE (0000:00:12.0) NSID 3 from core 0: 5332.01 20.83 3001.18 1048.66 9651.70 00:10:20.411 ======================================================== 00:10:20.411 Total : 31992.09 124.97 3000.57 1034.55 9651.70 00:10:20.411 00:10:20.411 Initializing NVMe Controllers 00:10:20.411 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:20.411 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:20.411 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:20.411 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:20.411 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:20.411 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:20.411 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:20.411 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:20.411 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:20.411 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:20.411 Initialization complete. Launching workers. 00:10:20.411 ======================================================== 00:10:20.411 Latency(us) 00:10:20.411 Device Information : IOPS MiB/s Average min max 00:10:20.411 PCIE (0000:00:10.0) NSID 1 from core 1: 4968.04 19.41 3218.64 1047.20 6465.68 00:10:20.411 PCIE (0000:00:11.0) NSID 1 from core 1: 4968.04 19.41 3220.04 1048.35 6241.02 00:10:20.411 PCIE (0000:00:13.0) NSID 1 from core 1: 4968.04 19.41 3219.92 1122.66 6351.22 00:10:20.411 PCIE (0000:00:12.0) NSID 1 from core 1: 4968.04 19.41 3219.81 1103.89 6208.15 00:10:20.411 PCIE (0000:00:12.0) NSID 2 from core 1: 4968.04 19.41 3219.68 1076.30 6350.21 00:10:20.411 PCIE (0000:00:12.0) NSID 3 from core 1: 4968.04 19.41 3219.68 1071.61 6215.76 00:10:20.411 ======================================================== 00:10:20.411 Total : 29808.26 116.44 3219.63 1047.20 6465.68 00:10:20.411 00:10:22.308 Initializing NVMe Controllers 00:10:22.308 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:22.308 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:22.308 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:22.308 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:22.308 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:22.308 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:22.308 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:22.308 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:22.308 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:22.308 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:22.308 Initialization complete. Launching workers. 
00:10:22.308 ======================================================== 00:10:22.308 Latency(us) 00:10:22.308 Device Information : IOPS MiB/s Average min max 00:10:22.308 PCIE (0000:00:10.0) NSID 1 from core 2: 3615.61 14.12 4421.97 1009.41 12377.79 00:10:22.308 PCIE (0000:00:11.0) NSID 1 from core 2: 3615.61 14.12 4424.84 1050.85 13443.99 00:10:22.308 PCIE (0000:00:13.0) NSID 1 from core 2: 3615.61 14.12 4424.78 1032.99 13322.66 00:10:22.308 PCIE (0000:00:12.0) NSID 1 from core 2: 3615.61 14.12 4424.68 1021.58 13108.12 00:10:22.308 PCIE (0000:00:12.0) NSID 2 from core 2: 3615.61 14.12 4424.60 1019.17 12849.24 00:10:22.308 PCIE (0000:00:12.0) NSID 3 from core 2: 3615.61 14.12 4424.27 938.51 13032.87 00:10:22.308 ======================================================== 00:10:22.308 Total : 21693.65 84.74 4424.19 938.51 13443.99 00:10:22.308 00:10:22.308 ************************************ 00:10:22.308 END TEST nvme_multi_secondary 00:10:22.308 ************************************ 00:10:22.308 11:22:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65318 00:10:22.308 11:22:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65319 00:10:22.308 00:10:22.308 real 0m10.729s 00:10:22.308 user 0m18.646s 00:10:22.308 sys 0m1.038s 00:10:22.308 11:22:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.308 11:22:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:22.308 11:22:05 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:22.308 11:22:05 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:22.308 11:22:05 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64245 ]] 00:10:22.308 11:22:05 nvme -- common/autotest_common.sh@1092 -- # kill 64245 00:10:22.308 11:22:05 nvme -- common/autotest_common.sh@1093 -- # wait 64245 00:10:22.308 [2024-11-15 11:22:05.185527] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.185601] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.185631] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.185648] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.187890] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.188131] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.188282] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.188548] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.190625] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 
00:10:22.308 [2024-11-15 11:22:05.190686] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.190719] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.190733] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.192855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.192923] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.192973] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.308 [2024-11-15 11:22:05.192987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65191) is not found. Dropping the request. 00:10:22.569 11:22:05 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:10:22.569 11:22:05 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:10:22.569 11:22:05 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:22.569 11:22:05 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:22.569 11:22:05 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.569 11:22:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.569 ************************************ 00:10:22.569 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:22.569 ************************************ 00:10:22.569 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:22.569 * Looking for test storage... 
00:10:22.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:22.569 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.569 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.569 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.845 --rc genhtml_branch_coverage=1 00:10:22.845 --rc genhtml_function_coverage=1 00:10:22.845 --rc genhtml_legend=1 00:10:22.845 --rc geninfo_all_blocks=1 00:10:22.845 --rc geninfo_unexecuted_blocks=1 00:10:22.845 00:10:22.845 ' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.845 --rc genhtml_branch_coverage=1 00:10:22.845 --rc genhtml_function_coverage=1 00:10:22.845 --rc genhtml_legend=1 00:10:22.845 --rc geninfo_all_blocks=1 00:10:22.845 --rc geninfo_unexecuted_blocks=1 00:10:22.845 00:10:22.845 ' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.845 --rc genhtml_branch_coverage=1 00:10:22.845 --rc genhtml_function_coverage=1 00:10:22.845 --rc genhtml_legend=1 00:10:22.845 --rc geninfo_all_blocks=1 00:10:22.845 --rc geninfo_unexecuted_blocks=1 00:10:22.845 00:10:22.845 ' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.845 --rc genhtml_branch_coverage=1 00:10:22.845 --rc genhtml_function_coverage=1 00:10:22.845 --rc genhtml_legend=1 00:10:22.845 --rc geninfo_all_blocks=1 00:10:22.845 --rc geninfo_unexecuted_blocks=1 00:10:22.845 00:10:22.845 ' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:22.845 
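[editor's note] The lcov gate traced above (scripts/common.sh's lt/cmp_versions) is a field-by-field numeric compare over versions split on '.', '-' and ':'. A condensed reconstruction, not the verbatim source; helper name and the non-numeric-field-becomes-0 behavior follow the decimal() calls in the trace:

cmp_versions_sketch() {
    local op=$2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        [[ $a =~ ^[0-9]+$ ]] || a=0    # decimal() in the trace: non-numeric -> 0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]                 # all fields equal: only <=, >=, == pass
}
# Mirrors the traced call "lt 1.15 2": succeeds because 1 < 2 in the first field.
cmp_versions_sketch 1.15 '<' 2 && echo "lcov predates 2.x"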
11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:22.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65481 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65481 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65481 ']' 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
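[editor's note] get_first_nvme_bdf, just traced, derives the target controller from SPDK's generated bdev config rather than from lspci: gen_nvme.sh emits JSON, jq pulls each traddr, and the first address wins. A minimal sketch of that flow with paths as in the trace and error handling trimmed:

rootdir=/home/vagrant/spdk_repo/spdk    # as in the trace
get_nvme_bdfs() {
    local -a bdfs
    # One PCI address per attach_controller entry in the generated config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe devices" >&2; return 1; }
    printf '%s\n' "${bdfs[@]}"
}
get_first_nvme_bdf() {
    get_nvme_bdfs | head -n1            # here: 0000:00:10.0 of the four QEMU drives
}
bdf=$(get_first_nvme_bdf)
[[ -n $bdf ]] || exit 1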
00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:22.845 11:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:22.845 [2024-11-15 11:22:05.763556] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:10:22.845 [2024-11-15 11:22:05.764075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65481 ] 00:10:23.112 [2024-11-15 11:22:05.965539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.370 [2024-11-15 11:22:06.092362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.370 [2024-11-15 11:22:06.092409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.370 [2024-11-15 11:22:06.092518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.370 [2024-11-15 11:22:06.092518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.303 11:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:24.303 11:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:10:24.303 11:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:24.303 11:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.303 11:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.303 nvme0n1 00:10:24.303 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.303 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:24.303 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Iyt3W.txt 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:24.304 true 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731669727 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65504 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:24.304 11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:24.304 
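[editor's note] Condensed, the stuck-admin-command scenario the test has just armed (RPC names and flags exactly as traced; rpc.py stands in for the rpc_cmd wrapper, and $get_features_sqe_b64 is the traced c2h payload, a 64-byte Get Features SQE with cdw10=0x7, number of queues):

# 1. Attach the first controller as bdev 'nvme0'.
rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# 2. Arm injection: hold the next admin Get Features (opc 0x0a = 10) for up to
#    15 s, never submit it to the device, and complete it with SCT=0/SC=1
#    (generic / Invalid Opcode) when it is finally aborted.
rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# 3. Fire the admin command in the background; it parks in the pending queue.
rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$get_features_sqe_b64" &
get_feat_pid=$!
# 4. Reset the controller: the reset path must abort and manually complete the
#    stuck request (the "Command completed manually" / INVALID OPCODE (00/01)
#    lines at 11:22:09 below), well inside test_timeout=5 s.
rpc.py bdev_nvme_reset_controller nvme0
wait "$get_feat_pid"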
11:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.221 [2024-11-15 11:22:09.076546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:26.221 [2024-11-15 11:22:09.076998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:26.221 [2024-11-15 11:22:09.077069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:26.221 [2024-11-15 11:22:09.077094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:26.221 [2024-11-15 11:22:09.079129] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:26.221 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65504 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65504 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65504 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:26.221 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Iyt3W.txt 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Iyt3W.txt 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65481 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65481 ']' 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65481 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65481 00:10:26.477 killing process with pid 65481 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65481' 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65481 00:10:26.477 11:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65481 00:10:28.370 11:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:28.370 11:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:28.370 00:10:28.370 real 0m5.934s 00:10:28.370 user 0m20.795s 00:10:28.370 sys 0m0.797s 00:10:28.370 11:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 
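[editor's note] The .cpl field pulled out of the temp file above is the raw 16-byte completion-queue entry, base64-encoded; the status word sits in its last two bytes (phase in bit 0, SC in bits 1-8, SCT in bits 9-11). base64_decode_bits' exact argument semantics aren't visible in the trace, so this sketch uses explicit shift/mask pairs that reproduce the same 0x1 / 0x0 results:

# Decode the CQE from the RPC output and extract SC/SCT from the status word.
cpl_b64='AAAAAAAAAAAAAAAAAAACAA=='              # 16 bytes, from jq -r .cpl
bytes=($(base64 -d <<< "$cpl_b64" | hexdump -ve '/1 "0x%02x\n"'))
# DW3 bits 16..31 are the status field: bytes 14..15, little-endian.
status=$(( bytes[14] | (bytes[15] << 8) ))      # here: 0x0002
sc=$((  (status >> 1) & 0xff ))                 # status code      -> 0x1
sct=$(( (status >> 9) & 0x7  ))                 # status code type -> 0x0
printf 'sct=0x%x sc=0x%x\n' "$sct" "$sc"
# The final assertion compares these against the injected --sct 0 --sc 1,
# proving the reset completed the stuck command with the injected status.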
00:10:28.370 ************************************ 00:10:28.370 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:28.370 ************************************ 00:10:28.370 11:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:28.629 11:22:11 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:28.629 11:22:11 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:28.629 11:22:11 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:28.629 11:22:11 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.629 11:22:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:28.629 ************************************ 00:10:28.629 START TEST nvme_fio 00:10:28.629 ************************************ 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:28.629 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:28.629 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:28.887 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:28.887 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:29.146 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:29.146 11:22:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:29.146 11:22:11 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:29.146 11:22:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:29.146 11:22:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:29.146 11:22:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:29.146 11:22:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:29.146 11:22:12 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:29.146 11:22:12 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:29.404 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:29.404 fio-3.35 00:10:29.404 Starting 1 thread 00:10:32.709 00:10:32.709 test: (groupid=0, jobs=1): err= 0: pid=65649: Fri Nov 15 11:22:15 2024 00:10:32.709 read: IOPS=15.3k, BW=59.7MiB/s (62.6MB/s)(119MiB/2001msec) 00:10:32.709 slat (nsec): min=3811, max=79348, avg=6339.96, stdev=3556.55 00:10:32.709 clat (usec): min=228, max=7255, avg=4167.46, stdev=402.54 00:10:32.709 lat (usec): min=233, max=7291, avg=4173.80, stdev=402.77 00:10:32.709 clat percentiles (usec): 00:10:32.709 | 1.00th=[ 3392], 5.00th=[ 3556], 10.00th=[ 3687], 20.00th=[ 3818], 00:10:32.709 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4293], 00:10:32.709 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 4817], 00:10:32.709 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5407], 00:10:32.709 | 99.99th=[ 7046] 00:10:32.709 bw ( KiB/s): min=59264, max=65016, per=100.00%, avg=62177.00, stdev=2876.71, samples=3 00:10:32.709 iops : min=14816, max=16254, avg=15544.00, stdev=719.17, samples=3 00:10:32.709 write: IOPS=15.3k, BW=59.8MiB/s (62.7MB/s)(120MiB/2001msec); 0 zone resets 00:10:32.709 slat (nsec): min=3894, max=72116, avg=6667.04, stdev=3655.64 00:10:32.710 clat (usec): min=291, max=7090, avg=4179.43, stdev=399.53 00:10:32.710 lat (usec): min=297, max=7101, avg=4186.10, stdev=399.80 00:10:32.710 clat percentiles (usec): 00:10:32.710 | 1.00th=[ 3425], 5.00th=[ 3589], 10.00th=[ 3687], 20.00th=[ 3818], 00:10:32.710 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4293], 00:10:32.710 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4686], 95.00th=[ 4817], 00:10:32.710 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 6259], 00:10:32.710 | 99.99th=[ 6980] 00:10:32.710 bw ( KiB/s): min=58696, max=63864, per=100.00%, avg=61736.67, stdev=2702.35, samples=3 00:10:32.710 iops : min=14674, max=15966, avg=15434.00, stdev=675.50, samples=3 00:10:32.710 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:32.710 lat (msec) : 2=0.05%, 4=35.08%, 10=64.84% 00:10:32.710 cpu : usr=98.85%, sys=0.05%, ctx=6, majf=0, 
minf=607 00:10:32.710 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:32.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.710 issued rwts: total=30573,30616,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.710 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.710 00:10:32.710 Run status group 0 (all jobs): 00:10:32.710 READ: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=119MiB (125MB), run=2001-2001msec 00:10:32.710 WRITE: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=120MiB (125MB), run=2001-2001msec 00:10:32.710 ----------------------------------------------------- 00:10:32.710 Suppressions used: 00:10:32.710 count bytes template 00:10:32.710 1 32 /usr/src/fio/parse.c 00:10:32.710 1 8 libtcmalloc_minimal.so 00:10:32.710 ----------------------------------------------------- 00:10:32.710 00:10:32.710 11:22:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:32.710 11:22:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:32.710 11:22:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:32.710 11:22:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:32.969 11:22:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:32.969 11:22:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:33.227 11:22:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:33.227 11:22:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:33.227 11:22:16 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:33.227 11:22:16 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:33.485 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:33.485 fio-3.35 00:10:33.485 Starting 1 thread 00:10:36.796 00:10:36.796 test: (groupid=0, jobs=1): err= 0: pid=65715: Fri Nov 15 11:22:19 2024 00:10:36.796 read: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec) 00:10:36.796 slat (nsec): min=4423, max=62210, avg=6785.42, stdev=3497.75 00:10:36.796 clat (usec): min=264, max=10191, avg=4385.95, stdev=502.83 00:10:36.796 lat (usec): min=270, max=10247, avg=4392.74, stdev=503.39 00:10:36.796 clat percentiles (usec): 00:10:36.796 | 1.00th=[ 3589], 5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3949], 00:10:36.796 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4490], 00:10:36.796 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5211], 00:10:36.796 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7046], 99.95th=[ 8717], 00:10:36.796 | 99.99th=[10159] 00:10:36.796 bw ( KiB/s): min=51672, max=60248, per=97.22%, avg=56429.33, stdev=4364.37, samples=3 00:10:36.796 iops : min=12918, max=15062, avg=14107.33, stdev=1091.09, samples=3 00:10:36.796 write: IOPS=14.5k, BW=56.8MiB/s (59.5MB/s)(114MiB/2001msec); 0 zone resets 00:10:36.796 slat (usec): min=4, max=264, avg= 7.11, stdev= 4.13 00:10:36.796 clat (usec): min=298, max=10045, avg=4394.02, stdev=506.21 00:10:36.796 lat (usec): min=304, max=10064, avg=4401.13, stdev=506.73 00:10:36.796 clat percentiles (usec): 00:10:36.796 | 1.00th=[ 3589], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 3982], 00:10:36.796 | 30.00th=[ 4080], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4490], 00:10:36.796 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5014], 95.00th=[ 5211], 00:10:36.796 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 7570], 99.95th=[ 8848], 00:10:36.796 | 99.99th=[ 9896] 00:10:36.796 bw ( KiB/s): min=51864, max=60280, per=97.06%, avg=56416.00, stdev=4249.97, samples=3 00:10:36.796 iops : min=12966, max=15070, avg=14104.00, stdev=1062.49, samples=3 00:10:36.796 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.01% 00:10:36.796 lat (msec) : 2=0.06%, 4=23.23%, 10=76.65%, 20=0.01% 00:10:36.796 cpu : usr=97.95%, sys=0.60%, ctx=24, majf=0, minf=608 00:10:36.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:36.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.796 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.796 issued rwts: total=29036,29076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.796 00:10:36.796 Run status group 0 (all jobs): 00:10:36.796 READ: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:10:36.796 WRITE: bw=56.8MiB/s (59.5MB/s), 56.8MiB/s-56.8MiB/s (59.5MB/s-59.5MB/s), io=114MiB (119MB), run=2001-2001msec 00:10:36.796 ----------------------------------------------------- 00:10:36.796 Suppressions used: 00:10:36.796 count bytes template 00:10:36.796 1 32 /usr/src/fio/parse.c 00:10:36.796 1 8 libtcmalloc_minimal.so 00:10:36.796 ----------------------------------------------------- 00:10:36.796 00:10:36.796 
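[editor's note] Every controller goes through the same gate, visible in the identify traces before each fio job: the namespace check decides whether to run at all, and the 'Extended Data LBA' check decides the block size, since extended LBAs interleave metadata with data. A condensed sketch of that per-bdf loop; the 4096+metadata value is an assumption (this log only shows the plain-4096 branch):

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
for bdf in "${bdfs[@]}"; do
    # Skip controllers with no namespaces attached.
    "$identify" -r "trtype:PCIe traddr:$bdf" \
        | grep -qE '^Namespace ID:[0-9]+' || continue
    # Extended Data LBA: fio's block size must cover data + metadata.
    if "$identify" -r "trtype:PCIe traddr:$bdf" | grep -q 'Extended Data LBA'; then
        bs=4104    # assumed 4096 + 8B metadata; actual value not in this log
    else
        bs=4096
    fi
    # Dots instead of colons in the filename, as the SPDK fio plugin expects.
    fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
    ran_fio=true
done

The fio_plugin wrapper repeated in each trace serves the sanitizer build: it ldd's the spdk_nvme ioengine, greps out libasan, and LD_PRELOADs the sanitizer runtime ahead of the plugin so ASAN's interceptors are initialized before fio dlopens the engine.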
11:22:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:36.796 11:22:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:36.797 11:22:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:36.797 11:22:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:37.363 11:22:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:37.363 11:22:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:37.621 11:22:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:37.621 11:22:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:37.621 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:37.622 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:37.622 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:37.622 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:37.622 11:22:20 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:37.622 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:37.622 fio-3.35 00:10:37.622 Starting 1 thread 00:10:40.917 00:10:40.917 test: (groupid=0, jobs=1): err= 0: pid=65780: Fri Nov 15 11:22:23 2024 00:10:40.917 read: IOPS=17.3k, BW=67.8MiB/s (71.1MB/s)(136MiB/2001msec) 00:10:40.917 slat (nsec): min=4275, max=79220, avg=5926.84, stdev=2467.47 00:10:40.917 clat (usec): min=267, max=11027, avg=3664.14, stdev=414.62 00:10:40.917 lat (usec): min=273, max=11071, avg=3670.07, stdev=415.24 00:10:40.917 clat percentiles (usec): 00:10:40.917 | 1.00th=[ 3163], 5.00th=[ 3294], 10.00th=[ 3326], 20.00th=[ 3425], 00:10:40.917 | 30.00th=[ 3458], 
40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3621], 00:10:40.917 | 70.00th=[ 3687], 80.00th=[ 3818], 90.00th=[ 4080], 95.00th=[ 4621], 00:10:40.917 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 7046], 99.95th=[ 9241], 00:10:40.917 | 99.99th=[10945] 00:10:40.917 bw ( KiB/s): min=63632, max=71664, per=99.14%, avg=68792.00, stdev=4478.22, samples=3 00:10:40.917 iops : min=15908, max=17916, avg=17198.00, stdev=1119.56, samples=3 00:10:40.917 write: IOPS=17.4k, BW=67.8MiB/s (71.1MB/s)(136MiB/2001msec); 0 zone resets 00:10:40.917 slat (nsec): min=4344, max=85347, avg=6160.75, stdev=2614.47 00:10:40.917 clat (usec): min=258, max=10913, avg=3681.35, stdev=421.86 00:10:40.917 lat (usec): min=263, max=10930, avg=3687.51, stdev=422.49 00:10:40.917 clat percentiles (usec): 00:10:40.917 | 1.00th=[ 3195], 5.00th=[ 3294], 10.00th=[ 3359], 20.00th=[ 3425], 00:10:40.917 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3654], 00:10:40.917 | 70.00th=[ 3720], 80.00th=[ 3818], 90.00th=[ 4113], 95.00th=[ 4621], 00:10:40.917 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 7767], 99.95th=[ 9503], 00:10:40.917 | 99.99th=[10683] 00:10:40.917 bw ( KiB/s): min=63952, max=71216, per=98.89%, avg=68672.00, stdev=4091.78, samples=3 00:10:40.917 iops : min=15988, max=17804, avg=17168.00, stdev=1022.94, samples=3 00:10:40.917 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:40.917 lat (msec) : 2=0.06%, 4=87.89%, 10=11.98%, 20=0.03% 00:10:40.917 cpu : usr=99.10%, sys=0.00%, ctx=3, majf=0, minf=607 00:10:40.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:40.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.917 issued rwts: total=34712,34740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.917 00:10:40.917 Run status group 0 (all jobs): 00:10:40.917 READ: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=136MiB (142MB), run=2001-2001msec 00:10:40.917 WRITE: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=136MiB (142MB), run=2001-2001msec 00:10:41.176 ----------------------------------------------------- 00:10:41.176 Suppressions used: 00:10:41.176 count bytes template 00:10:41.176 1 32 /usr/src/fio/parse.c 00:10:41.176 1 8 libtcmalloc_minimal.so 00:10:41.176 ----------------------------------------------------- 00:10:41.176 00:10:41.176 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:41.176 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:41.176 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:41.176 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:41.435 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:41.435 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:42.005 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:42.005 11:22:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:42.005 11:22:24 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:42.005 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:42.005 fio-3.35 00:10:42.005 Starting 1 thread 00:10:47.295 00:10:47.295 test: (groupid=0, jobs=1): err= 0: pid=65842: Fri Nov 15 11:22:29 2024 00:10:47.295 read: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(139MiB/2001msec) 00:10:47.295 slat (nsec): min=4093, max=81473, avg=5768.18, stdev=2491.67 00:10:47.295 clat (usec): min=410, max=9801, avg=3586.75, stdev=438.67 00:10:47.295 lat (usec): min=425, max=9856, avg=3592.52, stdev=439.19 00:10:47.295 clat percentiles (usec): 00:10:47.295 | 1.00th=[ 2540], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3326], 00:10:47.295 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3589], 00:10:47.295 | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 4047], 95.00th=[ 4424], 00:10:47.295 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 6849], 99.95th=[ 8094], 00:10:47.295 | 99.99th=[ 9503] 00:10:47.295 bw ( KiB/s): min=65736, max=75192, per=99.60%, avg=70693.33, stdev=4744.66, samples=3 00:10:47.295 iops : min=16434, max=18798, avg=17673.33, stdev=1186.16, samples=3 00:10:47.295 write: IOPS=17.7k, BW=69.3MiB/s (72.7MB/s)(139MiB/2001msec); 0 zone resets 00:10:47.295 slat (nsec): min=4319, max=49539, avg=6020.98, stdev=2521.09 00:10:47.295 clat (usec): min=296, max=9622, avg=3598.14, stdev=446.81 00:10:47.295 lat (usec): min=301, max=9663, avg=3604.16, stdev=447.32 00:10:47.295 clat percentiles (usec): 00:10:47.295 | 1.00th=[ 2507], 5.00th=[ 3163], 10.00th=[ 3261], 20.00th=[ 3359], 00:10:47.295 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3589], 00:10:47.295 | 70.00th=[ 3654], 80.00th=[ 3785], 90.00th=[ 4080], 95.00th=[ 4490], 
00:10:47.295 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 7242], 99.95th=[ 8356], 00:10:47.295 | 99.99th=[ 9372] 00:10:47.295 bw ( KiB/s): min=66152, max=74952, per=99.55%, avg=70648.00, stdev=4403.14, samples=3 00:10:47.295 iops : min=16538, max=18738, avg=17662.00, stdev=1100.79, samples=3 00:10:47.295 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:47.295 lat (msec) : 2=0.35%, 4=87.86%, 10=11.76% 00:10:47.296 cpu : usr=99.00%, sys=0.15%, ctx=3, majf=0, minf=606 00:10:47.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:47.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.296 issued rwts: total=35506,35500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.296 00:10:47.296 Run status group 0 (all jobs): 00:10:47.296 READ: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=139MiB (145MB), run=2001-2001msec 00:10:47.296 WRITE: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=139MiB (145MB), run=2001-2001msec 00:10:47.296 ----------------------------------------------------- 00:10:47.296 Suppressions used: 00:10:47.296 count bytes template 00:10:47.296 1 32 /usr/src/fio/parse.c 00:10:47.296 1 8 libtcmalloc_minimal.so 00:10:47.296 ----------------------------------------------------- 00:10:47.296 00:10:47.296 11:22:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:47.296 11:22:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:47.296 00:10:47.296 real 0m18.387s 00:10:47.296 user 0m14.152s 00:10:47.296 sys 0m3.959s 00:10:47.296 11:22:29 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.296 11:22:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:47.296 ************************************ 00:10:47.296 END TEST nvme_fio 00:10:47.296 ************************************ 00:10:47.296 00:10:47.296 real 1m32.540s 00:10:47.296 user 3m45.919s 00:10:47.296 sys 0m17.270s 00:10:47.296 11:22:29 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:47.296 11:22:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:47.296 ************************************ 00:10:47.296 END TEST nvme 00:10:47.296 ************************************ 00:10:47.296 11:22:29 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:47.296 11:22:29 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:47.296 11:22:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:47.296 11:22:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:47.296 11:22:29 -- common/autotest_common.sh@10 -- # set +x 00:10:47.296 ************************************ 00:10:47.296 START TEST nvme_scc 00:10:47.296 ************************************ 00:10:47.296 11:22:29 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:47.296 * Looking for test storage... 
00:10:47.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:47.296 11:22:29 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:47.296 11:22:29 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:47.296 11:22:29 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:47.296 11:22:30 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:47.296 11:22:30 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.296 11:22:30 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.296 --rc genhtml_branch_coverage=1 00:10:47.296 --rc genhtml_function_coverage=1 00:10:47.296 --rc genhtml_legend=1 00:10:47.296 --rc geninfo_all_blocks=1 00:10:47.296 --rc geninfo_unexecuted_blocks=1 00:10:47.296 00:10:47.296 ' 00:10:47.296 11:22:30 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.296 --rc genhtml_branch_coverage=1 00:10:47.296 --rc genhtml_function_coverage=1 00:10:47.296 --rc genhtml_legend=1 00:10:47.296 --rc geninfo_all_blocks=1 00:10:47.296 --rc geninfo_unexecuted_blocks=1 00:10:47.296 00:10:47.296 ' 00:10:47.296 11:22:30 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.296 --rc genhtml_branch_coverage=1 00:10:47.296 --rc genhtml_function_coverage=1 00:10:47.296 --rc genhtml_legend=1 00:10:47.296 --rc geninfo_all_blocks=1 00:10:47.296 --rc geninfo_unexecuted_blocks=1 00:10:47.296 00:10:47.296 ' 00:10:47.296 11:22:30 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:47.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.296 --rc genhtml_branch_coverage=1 00:10:47.296 --rc genhtml_function_coverage=1 00:10:47.296 --rc genhtml_legend=1 00:10:47.296 --rc geninfo_all_blocks=1 00:10:47.296 --rc geninfo_unexecuted_blocks=1 00:10:47.296 00:10:47.296 ' 00:10:47.296 11:22:30 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:47.296 11:22:30 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:47.296 11:22:30 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:47.296 11:22:30 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:47.296 11:22:30 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.296 11:22:30 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.296 11:22:30 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.296 11:22:30 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.296 11:22:30 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.297 11:22:30 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:47.297 11:22:30 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
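[editor's note] functions.sh, sourced above via cuse/common.sh, is what turns raw nvme-cli output into queryable shell state: one associative array per controller, keyed by register name, populated by the scan that follows (nvme0[vid]=0x1b36 and so on). A simplified sketch of that parse; the real nvme_get also handles quoted values, shifts nested sections, and scans namespaces:

# Populate an associative array from `nvme id-ctrl` 'reg : val' lines,
# mirroring the nvme_get trace below. nvme-cli path as in the trace.
declare -A nvme0=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                                  # 'vid   ' -> 'vid'
    val=$(sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' <<< "$val")
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "${nvme0[vid]}"                                          # 0x1b36 here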
00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:47.297 11:22:30 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:47.297 11:22:30 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.297 11:22:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:47.297 11:22:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:47.297 11:22:30 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:47.297 11:22:30 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:47.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.813 Waiting for block devices as requested 00:10:47.813 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.813 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:47.813 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.072 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.350 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:53.350 11:22:35 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:53.350 11:22:35 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:53.350 11:22:35 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:53.350 11:22:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:53.350 11:22:35 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
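From this point the trace is scan_nvme_ctrls walking /sys/class/nvme/nvme*, checking each controller's PCI address with pci_can_use, and letting nvme_get turn `nvme id-ctrl /dev/nvmeX` output into a global associative array, one `reg : val` pair per entry. A minimal sketch of that parsing loop, assuming `nvme id-ctrl` prints `key : value` lines (the helper name and whitespace handling below are illustrative, not the repo's exact function):

#!/usr/bin/env bash
# Sketch: parse `nvme id-ctrl` into a named global associative array, the way
# nvme/functions.sh's nvme_get does above (IFS=: + read -r reg val + eval).
scan_ctrl() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                 # e.g. declare -gA nvme0, as in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip the padding around the key
        val=${val# }                    # drop one leading space, keep the rest verbatim
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[${reg}]=\"\$val\""    # nvme0[vid]="0x1b36", nvme0[sn]="12341 ", ...
    done < <(nvme id-ctrl "$dev")
}

scan_ctrl nvme0 /dev/nvme0
echo "vid=${nvme0[vid]} sn=${nvme0[sn]} mdts=${nvme0[mdts]}"

Splitting on the first colon only is what keeps values like subnqn (nqn.2019-08.org.qemu:12341) intact, since `read` leaves everything after the first delimiter in `val`.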
00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.350 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
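By this entry the nvme0 array holds, among others, oacs=0x12a, frmw=0x3 and lpa=0x7 for the emulated QEMU controller. Decoding those per my reading of the NVMe base specification (the bit meanings below are not asserted anywhere in this log, so verify them against the spec revision reported in `ver`):

# oacs (Optional Admin Command Support), assumed bit layout from the NVMe base spec:
oacs=0x12a
(( oacs & (1 << 1) )) && echo "Format NVM"               # bit 1
(( oacs & (1 << 3) )) && echo "Namespace Management"     # bit 3
(( oacs & (1 << 5) )) && echo "Directives"               # bit 5
(( oacs & (1 << 8) )) && echo "Doorbell Buffer Config"   # bit 8

# lpa=0x7 would then mean: per-namespace SMART (bit 0), Commands Supported &
# Effects log (bit 1), and extended Get Log Page fields (bit 2).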
00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:53.351 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:53.351 11:22:35 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:53.352 11:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:53.352 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.353 11:22:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.353 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.354 11:22:36 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:53.354 11:22:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:53.354 11:22:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:53.355 11:22:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:53.355 11:22:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:53.355 11:22:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:53.355 
11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:10:53.355 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:53.356 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:10:53.357 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
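The dump above is nvme_get at work for nvme1: nvme/functions.sh runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1, splits each output line on ':' into a reg/val pair, and evals the pair into the global associative array nvme1. A minimal standalone sketch of that pattern, assuming nvme-cli prints one "reg : val" pair per line (nvme_get_demo and the whitespace trimming are illustrative, not the verbatim functions.sh source):

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse loop (illustrative names and trimming).
    nvme_get_demo() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                 # e.g. creates global nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip header/blank lines
            reg=${reg//[[:space:]]/}        # "vid       " -> "vid"
            eval "${ref}[\$reg]=\${val# }"  # e.g. nvme1[vid]=0x1b36
        done < <(nvme id-ctrl "$dev")
    }
    # Usage: nvme_get_demo nvme1 /dev/nvme1 && echo "${nvme1[sn]}"

Splitting only on the first ':' matters: values such as subnqn=nqn.2019-08.org.qemu:12340 contain colons themselves, and read -r hands everything after the first separator to val.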
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:10:53.358 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:53.359 11:22:36 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:53.359 11:22:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:53.359 11:22:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:53.359 11:22:36 nvme_scc -- scripts/common.sh@27 -- # return 0
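With nvme1 fully registered (ctrls maps the device name to itself, nvmes to its namespace array nvme1_ns, bdfs to PCI address 0000:00:10.0, and ordered_ctrls to its scan position), the loop advances to nvme2 once pci_can_use accepts 0000:00:12.0. A short sketch of how a test could walk these registries after the scan completes (list_ctrls is a hypothetical helper, not part of functions.sh):

    # Sketch: iterate the ctrls/nvmes/bdfs registries populated above.
    list_ctrls() {
        local ctrl
        for ctrl in "${!ctrls[@]}"; do      # e.g. nvme1, nvme2, ...
            local -n _ns=${nvmes[$ctrl]}    # nameref to e.g. nvme1_ns
            printf '%s @ %s: %u namespace(s)\n' \
                "$ctrl" "${bdfs[$ctrl]}" "${#_ns[@]}"
            unset -n _ns                    # drop nameref before next round
        done
    }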
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.359 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:53.360 11:22:36 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:53.360 11:22:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.360 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
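The repeating @21/@22/@23 triplets above are single passes of the nvme_get parse loop in nvme/functions.sh: each line of the nvme-cli id-ctrl output is split on the first ':' into a register name and a value, non-empty values are kept, and each pair is eval'd into a global associative array named after the device (declared earlier in the trace with local -gA 'nvme2=()'). A minimal sketch of that loop, reconstructed from the xtrace rather than quoted from the SPDK source, with whitespace trimming simplified:

nvme_get() {
    local ref=$1 reg val                          # functions.sh@17: ref is e.g. "nvme2"
    shift                                         # functions.sh@18
    local -gA "$ref=()"                           # functions.sh@20: global assoc array
    while IFS=: read -r reg val; do               # functions.sh@21: split "reg : val"
        reg=${reg//[[:space:]]/}; val=${val# }    # trimming simplified in this sketch
        [[ -n $val ]] || continue                 # functions.sh@22: skip empty values
        eval "${ref}[$reg]=\"\$val\""             # functions.sh@23: e.g. nvme2[mdts]=7
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16: id-ctrl or id-ns
}

Called as nvme_get nvme2 id-ctrl /dev/nvme2, this leaves ${nvme2[mdts]} and friends available to the rest of the test, which is exactly what the echoed assignments above establish. Because the split is on the first ':' only, multi-line fields in the human-readable id-ctrl output land in the array as fragments: the ps0 power-state descriptor further below is stored whole under ps0, while its continuation lines surface under keys such as rwt and active_power_workload.
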
00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:53.361 11:22:36 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:53.361 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:53.362 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.362 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:53.625 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:53.626 
11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.626 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
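The @53-@58 entries bracketing the controller dump (visible at the hand-off to nvme2n1 above, and again before nvme2n2 and nvme2n3 below) enumerate the controller's namespaces from sysfs and run nvme_get once per namespace, recording each in a nameref'd per-controller map. A sketch reconstructed from the trace; the wrapper name scan_namespaces is hypothetical, and it assumes an associative array such as nvme2_ns has already been declared:

scan_namespaces() {
    local ctrl=$1                                 # e.g. /sys/class/nvme/nvme2
    local -n _ctrl_ns="${ctrl##*/}_ns"            # functions.sh@53: e.g. nvme2_ns
    local ns ns_dev
    for ns in "$ctrl/${ctrl##*/}n"*; do           # functions.sh@54: nvme2n* entries
        [[ -e $ns ]] || continue                  # functions.sh@55: guard the glob
        ns_dev=${ns##*/}                          # functions.sh@56: nvme2n1, nvme2n2, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57: per-ns id-ns dump
        _ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: keyed by ns number
    done
}

The ${ns##*n} key strips everything through the last 'n', so /sys/class/nvme/nvme2/nvme2n1 registers as _ctrl_ns[1]=nvme2n1, matching the @58 assignments echoed in the trace.
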
00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.627 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.628 11:22:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:53.628 11:22:36 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:53.628 11:22:36 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.628 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.629 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 
11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 
11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.630 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:53.630 11:22:36 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.631 
11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:53.631 11:22:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:53.632 11:22:36 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:53.632 11:22:36 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:53.632 11:22:36 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:53.632 11:22:36 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:53.632 11:22:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
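At this point the same loop is replaying "nvme id-ctrl /dev/nvme3" (functions.sh@16 above). For orientation, the raw nvme-cli output being consumed looks roughly like this; the values are taken from the trace, the column widths are illustrative:

    vid       : 0x1b36
    ssvid     : 0x1af4
    sn        : 12343
    mn        : QEMU NVMe Ctrl
    fr        : 8.0.0
    rab       : 6

Splitting each line on ':' is also why string-valued fields keep their padding: sn is stored as '12343 ' and mn as 'QEMU NVMe Ctrl ', exactly as the eval lines show.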
00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
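These per-controller arrays are read back near the end of this section (functions.sh@69-76) through a bash nameref, so one generic helper can index whichever array a caller names. A minimal sketch of that read path, assuming the arrays were populated as above:

    # Sketch of get_nvme_ctrl_feature as traced further down.
    get_feature_sketch() {
        local ctrl=$1 reg=${2:-oncs}
        local -n _ctrl=$ctrl                 # functions.sh@73: nameref onto e.g. nvme3
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"   # functions.sh@75-76
    }

    get_feature_sketch nvme3 oncs            # -> 0x15d, as echoed in the trace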
00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:53.632 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 
11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.633 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
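The ONCS word captured just above (nvme3[oncs]=0x15d) is what the scc feature filter at the end of this section keys on: bit 8 of ONCS advertises the Copy command in NVMe 2.0, and functions.sh@188 tests exactly that. Reduced to its arithmetic:

    oncs=0x15d
    (( oncs & 1 << 8 )) && echo "Simple Copy supported"   # 0x15d & 0x100 != 0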
00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.634 11:22:36 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.634 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:53.635 11:22:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:53.635 
11:22:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:53.635 11:22:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:53.893 11:22:36 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:53.893 11:22:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:53.893 11:22:36 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:53.893 11:22:36 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:54.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:55.088 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.088 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.088 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.088 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:55.088 11:22:37 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:55.088 11:22:37 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:55.088 11:22:37 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.088 11:22:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:55.088 ************************************ 00:10:55.088 START TEST nvme_simple_copy 00:10:55.088 ************************************ 00:10:55.088 11:22:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:55.346 Initializing NVMe Controllers 00:10:55.346 Attaching to 0000:00:10.0 00:10:55.346 Controller supports SCC. Attached to 0000:00:10.0 00:10:55.346 Namespace ID: 1 size: 6GB 00:10:55.346 Initialization complete. 00:10:55.346 00:10:55.346 Controller QEMU NVMe Ctrl (12340 ) 00:10:55.346 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:55.346 Namespace Block Size:4096 00:10:55.346 Writing LBAs 0 to 63 with Random Data 00:10:55.346 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:55.346 LBAs matching Written Data: 64 00:10:55.346 00:10:55.346 real 0m0.396s 00:10:55.346 user 0m0.196s 00:10:55.346 sys 0m0.097s 00:10:55.346 11:22:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.346 11:22:38 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:55.346 ************************************ 00:10:55.346 END TEST nvme_simple_copy 00:10:55.346 ************************************ 00:10:55.346 ************************************ 00:10:55.346 END TEST nvme_scc 00:10:55.346 ************************************ 00:10:55.346 00:10:55.346 real 0m8.433s 00:10:55.346 user 0m1.567s 00:10:55.346 sys 0m1.709s 00:10:55.346 11:22:38 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.346 11:22:38 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:55.607 11:22:38 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:55.607 11:22:38 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:55.607 11:22:38 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:55.607 11:22:38 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:55.607 11:22:38 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:55.607 11:22:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:55.607 11:22:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.607 11:22:38 -- common/autotest_common.sh@10 -- # set +x 00:10:55.607 ************************************ 00:10:55.607 START TEST nvme_fdp 00:10:55.607 ************************************ 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:10:55.607 * Looking for test storage... 
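
The ctrl_has_scc scan above picks the test controller by reading each controller's ONCS (Optional NVM Command Support) field out of the parsed id-ctrl data and testing bit 8, which advertises the Copy command; 0x15d has that bit set, so all four QEMU controllers qualify and nvme1 is chosen first. A minimal standalone sketch of the same check, assuming nvme-cli is installed and that its id-ctrl output keeps the usual "name : value" layout (the function name and awk parsing here are illustrative, not SPDK's):

    #!/usr/bin/env bash
    # Does this controller advertise the NVMe Copy command (what the SCC test needs)?
    ctrl_supports_copy() {
        local dev=$1 oncs
        # Grab the ONCS field, e.g. "oncs : 0x15d"; an empty/missing field evaluates as 0.
        oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        # Bit 8 of ONCS is Copy command support.
        (( oncs & 1 << 8 ))
    }
    ctrl_supports_copy /dev/nvme1 && echo "nvme1 supports Simple Copy"
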
00:10:55.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.607 --rc genhtml_branch_coverage=1 00:10:55.607 --rc genhtml_function_coverage=1 00:10:55.607 --rc genhtml_legend=1 00:10:55.607 --rc geninfo_all_blocks=1 00:10:55.607 --rc geninfo_unexecuted_blocks=1 00:10:55.607 00:10:55.607 ' 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.607 --rc genhtml_branch_coverage=1 00:10:55.607 --rc genhtml_function_coverage=1 00:10:55.607 --rc genhtml_legend=1 00:10:55.607 --rc geninfo_all_blocks=1 00:10:55.607 --rc geninfo_unexecuted_blocks=1 00:10:55.607 00:10:55.607 ' 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.607 --rc genhtml_branch_coverage=1 00:10:55.607 --rc genhtml_function_coverage=1 00:10:55.607 --rc genhtml_legend=1 00:10:55.607 --rc geninfo_all_blocks=1 00:10:55.607 --rc geninfo_unexecuted_blocks=1 00:10:55.607 00:10:55.607 ' 00:10:55.607 11:22:38 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.607 --rc genhtml_branch_coverage=1 00:10:55.607 --rc genhtml_function_coverage=1 00:10:55.607 --rc genhtml_legend=1 00:10:55.607 --rc geninfo_all_blocks=1 00:10:55.607 --rc geninfo_unexecuted_blocks=1 00:10:55.607 00:10:55.607 ' 00:10:55.607 11:22:38 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:55.607 11:22:38 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:55.607 11:22:38 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:55.607 11:22:38 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:55.607 11:22:38 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.607 11:22:38 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.608 11:22:38 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.608 11:22:38 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.608 11:22:38 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.608 11:22:38 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:55.608 11:22:38 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:55.608 11:22:38 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:55.608 11:22:38 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:55.608 11:22:38 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:56.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:56.175 Waiting for block devices as requested 00:10:56.175 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.434 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.434 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:56.434 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:01.702 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:01.702 11:22:44 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:01.702 11:22:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:01.702 11:22:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:01.702 11:22:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:01.702 11:22:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.702 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:01.703 11:22:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:01.703 11:22:44 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
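
One of the fields just captured deserves a worked example: mdts=7. MDTS is expressed as a power of two in units of the controller's minimum memory page size (CAP.MPSMIN), so with the 4 KiB page size these QEMU controllers use (an assumption here; read CAP to be certain), the largest single transfer the device accepts is 2^7 * 4096 = 512 KiB, and an mdts of 0 would mean no limit.

    # Turn the mdts=7 seen above into a byte limit, assuming a 4 KiB CAP.MPSMIN.
    mdts=7
    page_size=4096
    max_xfer=$(( (1 << mdts) * page_size ))
    echo "max transfer: $max_xfer bytes ($(( max_xfer / 1024 )) KiB)"  # 524288 bytes, 512 KiB
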
00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:01.703 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:01.704 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:01.704 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.704 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:01.705 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:01.705 
11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:01.705 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:01.706 11:22:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:01.706 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:01.707 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:01.707 11:22:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:01.707 11:22:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:01.707 11:22:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:01.707 11:22:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # 
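
The block above is bash xtrace output from nvme/functions.sh: its nvme_get helper runs an nvme-cli query (id-ctrl here, id-ns for namespaces) and folds every "register : value" line of the output into a global associative array named after the device, which is why each field appears in the trace as an IFS=: / read -r reg val / eval triple. Below is a minimal sketch of that pattern, reconstructed from the traced statements at functions.sh@16-23 rather than copied from the SPDK source, so treat names and details as illustrative:

    # Parse "reg : val" lines from an nvme-cli command into a global
    # associative array named $1, e.g.: nvme_get nvme1 id-ctrl /dev/nvme1
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # global assoc array, as in the trace (@20)
        # Process substitution keeps the loop in the current shell, so the
        # array assignments survive after the loop finishes.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip headers/blank lines (@22)
            reg=${reg//[[:space:]]/}       # "sn        " -> "sn"
            eval "${ref}[${reg}]=\"${val# }\""
        done < <(nvme "$@")                # the CI uses /usr/local/src/nvme-cli/nvme
    }

Once nvme_get nvme1 id-ctrl /dev/nvme1 returns, later test logic can read fields directly, e.g. "${nvme1[sn]}" or "${nvme1[mdts]}".
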
IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:01.707 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 
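
Two of the values just captured decode as follows: ver=0x10400 follows the NVMe version-register layout (major 1, minor 4, tertiary 0, i.e. NVMe 1.4), and mdts=7 expresses the maximum data transfer size as a power of two in units of the controller's minimum memory page size; assuming the usual 4 KiB MPSMIN for these QEMU controllers, that is 2^7 x 4 KiB = 512 KiB per I/O command.
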
11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.708 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 
11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:01.709 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:01.710 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.710 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
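
With the controller's id-ctrl dump finished (the ps0 power-state line is the last field), the trace switches to the per-namespace loop at functions.sh@53-57: it binds a nameref onto the controller's namespace table, globs the namespaces under sysfs, and repeats the same nvme_get parse with id-ns. A sketch of that traced pattern, wrapped in a hypothetical function for self-containment (the caller must have created the table first, e.g. declare -gA nvme1_ns=()):

    scan_namespaces() {                     # $1 = /sys/class/nvme/nvmeX
        local ctrl=$1 ns ns_dev
        local -n _ctrl_ns=${ctrl##*/}_ns    # nameref onto e.g. nvme1_ns (@53)
        for ns in "$ctrl/${ctrl##*/}n"*; do # /sys/class/nvme/nvme1/nvme1n1 ...
            [[ -e $ns ]] || continue        # @55
            ns_dev=${ns##*/}                # nvme1n1 (@56)
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57
            _ctrl_ns[${ns##*n}]=$ns_dev     # "...nvme1n1" -> key "1" (@58)
        done
    }
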
0x17a17a ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.972 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:01.973 11:22:44 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.973 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:01.974 11:22:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:01.974 11:22:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:01.974 11:22:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:01.974 11:22:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:01.974 
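
The registration lines that close each controller's scan (functions.sh@60-63, visible above for nvme1 and earlier for nvme0) populate four lookup tables keyed by controller name, and the @47-@51 loop around them is the enumerator that feeds nvme_get. A condensed sketch of that outer loop; note the BDF derivation is an assumption of this sketch (the trace only shows the resulting value at @49), and pci_can_use is the allow/block-list gate from scripts/common.sh whose empty [[ =~ ]] tests indicate no filter is set in this run:

    declare -gA ctrls=() nvmes=() bdfs=()
    declare -ga ordered_ctrls=()

    scan_nvme_ctrls() {
        local ctrl ctrl_dev pci
        for ctrl in /sys/class/nvme/nvme*; do        # @47
            [[ -e $ctrl ]] || continue               # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed; yields e.g. 0000:00:10.0
            pci_can_use "$pci" || continue           # @50
            ctrl_dev=${ctrl##*/}                     # nvme1 (@51)
            declare -gA "${ctrl_dev}_ns=()"          # namespace table for this ctrl
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
            scan_namespaces "$ctrl"
            ctrls["$ctrl_dev"]=$ctrl_dev             # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns        # name of the ns table (@61)
            bdfs["$ctrl_dev"]=$pci                   # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index == instance number (@63)
        done
    }
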
11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:01.974 11:22:44 nvme_fdp 
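The long runs of IFS=:, read -r reg val, and eval that dominate this log are the body of nvme_get: it runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns), splits every output line on the first colon, strips the padding nvme-cli puts around field names, and stores the pair in a global associative array named after the device. A stripped-down sketch of the same parser (simplified; the real function also handles argument shifting and empty lines):

    #!/usr/bin/env bash
    # Sketch: parse nvme-cli's "field : value" output into a named global
    # associative array, as the traced nvme_get does.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"              # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}     # "lbaf  4 " -> "lbaf4"
            [[ -n $reg && -n $val ]] || continue
            eval "$ref[\$reg]=\${val# }" # nvme2[mdts]=7, nvme2[lbaf4]=...
        done < <(nvme "$cmd" "$dev")
    }

    nvme_get nvme2 id-ctrl /dev/nvme2 && echo "mdts=${nvme2[mdts]}"

With two variables, read leaves everything after the first colon in val, which is why composite values such as 'ms:0 lbads:12 rp:0 (in use)' survive intact.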
-- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.974 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- 
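Many of these id-ctrl fields are log2-encoded. mdts=7, captured just above, caps a single transfer at 2^7 memory pages; the page size comes from the controller's CAP.MPSMIN register, which this trace does not show, so the common 4 KiB value is assumed below:

    #!/usr/bin/env bash
    # Sketch: decode the log2-encoded MDTS field into bytes.
    mdts=7                   # from the id-ctrl parse above
    mpsmin=$((4 * 1024))     # ASSUMPTION: CAP.MPSMIN = 4 KiB (not in this trace)
    echo "max transfer: $(( mpsmin * (1 << mdts) / 1024 )) KiB"   # -> 512 KiB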
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.975 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:01.976 11:22:44 nvme_fdp -- 
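The sqes=0x66 and cqes=0x44 values captured here pack two nibbles each: bits 3:0 hold the required entry-size exponent and bits 7:4 the maximum, so this QEMU controller advertises the standard 64-byte submission and 16-byte completion queue entries:

    #!/usr/bin/env bash
    # Sketch: unpack the SQES/CQES nibbles from the id-ctrl parse above.
    sqes=0x66 cqes=0x44
    printf 'SQ entry: %d B min / %d B max\n' \
        $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))    # -> 64 / 64
    printf 'CQ entry: %d B min / %d B max\n' \
        $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))    # -> 16 / 16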
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.976 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:01.977 11:22:44 nvme_fdp -- 
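With the controller-level fields stored, the script turns to namespaces: local -n _ctrl_ns=nvme2_ns binds a nameref to the per-controller map, and the glob "$ctrl/${ctrl##*/}n"* expands to /sys/class/nvme/nvme2/nvme2n1, nvme2n2, ..., each of which gets the same nvme_get treatment with id-ns. A sketch of that walk (bash >= 4.3 for the nameref):

    #!/usr/bin/env bash
    # Sketch: enumerate one controller's namespaces via the sysfs glob,
    # filling the per-controller map through a nameref as the trace does.
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns
    declare -n _ctrl_ns=nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do     # /sys/class/nvme/nvme2/nvme2n*
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # nvme2n1, nvme2n2, ...
        # the real loop now calls: nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev     # key by namespace number
    done
    echo "namespaces: ${_ctrl_ns[*]}"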
nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:01.977 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
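Note that nsze, ncap, and nuse are block counts, not bytes. The byte size follows from the in-use LBA format: flbas=0x4 (bits 3:0) selects lbaf4, whose descriptor later in the trace reads lbads:12, i.e. 2^12-byte blocks, so this namespace works out to 4 GiB:

    #!/usr/bin/env bash
    # Sketch: namespace size in bytes = nsze blocks * 2^lbads bytes per block.
    nsze=0x100000            # 1,048,576 blocks, from the id-ns parse above
    lbads=12                 # from the matching "lbaf4 ... lbads:12" entry
    echo "$(( nsze * (1 << lbads) / 1024**3 )) GiB"   # -> 4 GiB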
00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:01.978 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- 
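Each lbafN entry is kept as the raw descriptor string, 'ms:<metadata bytes> lbads:<log2 block size> rp:<relative performance>', with '(in use)' appended on the active format, so consumers pull individual fields back out with a regex or parameter expansion:

    #!/usr/bin/env bash
    # Sketch: extract fields from a stored lbaf descriptor string.
    lbaf='ms:0 lbads:12 rp:0 (in use)'      # nvme2n1[lbaf4] above
    [[ $lbaf =~ ms:([0-9]+) ]]    && ms=${BASH_REMATCH[1]}
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "block: $((1 << lbads)) B, metadata: $ms B/block"
    [[ $lbaf == *'(in use)'* ]] && echo "this is the active format"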
nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.979 11:22:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.979 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:01.980 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:01.980 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:01.981 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.240 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
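(The loop traced above is the nvme_get helper at work: nvme-cli's id-ns output is read line by line with IFS=:, split into a register/value pair, and stored into a bash associative array, here nvme2n3, via eval. A minimal standalone sketch of that pattern, assuming bash >= 4.2 and an installed nvme-cli; the parse_ns name is illustrative, not a function from the SPDK tree:

    parse_ns() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                      # global associative array, as in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip lines with no value, as @22 does
            reg=${reg//[[:space:]]/}             # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
            eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme2n3[nsze]="0x100000"
        done < <(nvme id-ns "$dev")
    }

    parse_ns nvme2n3 /dev/nvme2n3 && echo "${nvme2n3[nsze]}"   # -> 0x100000

Note the last read variable keeps the remainder of the line, which is why multi-colon values such as "ms:0 lbads:9 rp:0" survive the IFS=: split intact.)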
00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:02.241 
11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:02.241 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:02.242 11:22:44 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:02.242 11:22:44 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:02.242 11:22:44 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:02.242 11:22:44 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:02.242 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.242 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 
11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.243 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 
11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.244 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
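(By this point the trace has moved past the nvme2 namespaces to the fourth controller: the /sys/class/nvme/nvme* walk found nvme3 behind PCI address 0000:00:13.0 and is parsing its id-ctrl output with the same read loop. A rough sketch of that discovery pass; the sysfs layout is standard Linux, but the loop body is an assumption rather than the functions.sh code:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        dev=/dev/${ctrl##*/}                              # e.g. /dev/nvme3
        echo "found $dev at $bdf"                         # then id-ctrl/id-ns as above
    done
)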
00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:02.245 11:22:44 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:02.245 11:22:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:02.245 11:22:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:02.246 11:22:45 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:02.246 11:22:45 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:02.246 11:22:45 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:02.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:03.376 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:03.376 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:03.376 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:03.376 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:03.376 11:22:46 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:03.376 11:22:46 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:03.376 11:22:46 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.376 11:22:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:03.376 ************************************ 00:11:03.376 START TEST nvme_flexible_data_placement 00:11:03.376 ************************************ 00:11:03.376 11:22:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:03.634 Initializing NVMe Controllers 00:11:03.634 Attaching to 0000:00:13.0 00:11:03.634 Controller supports FDP Attached to 0000:00:13.0 00:11:03.634 Namespace ID: 1 Endurance Group ID: 1 00:11:03.634 Initialization complete. 00:11:03.634 00:11:03.634 ================================== 00:11:03.634 == FDP tests for Namespace: #01 == 00:11:03.634 ================================== 00:11:03.634 00:11:03.634 Get Feature: FDP: 00:11:03.634 ================= 00:11:03.634 Enabled: Yes 00:11:03.634 FDP configuration Index: 0 00:11:03.634 00:11:03.634 FDP configurations log page 00:11:03.634 =========================== 00:11:03.634 Number of FDP configurations: 1 00:11:03.634 Version: 0 00:11:03.634 Size: 112 00:11:03.634 FDP Configuration Descriptor: 0 00:11:03.634 Descriptor Size: 96 00:11:03.634 Reclaim Group Identifier format: 2 00:11:03.634 FDP Volatile Write Cache: Not Present 00:11:03.634 FDP Configuration: Valid 00:11:03.634 Vendor Specific Size: 0 00:11:03.634 Number of Reclaim Groups: 2 00:11:03.634 Number of Reclaim Unit Handles: 8 00:11:03.634 Max Placement Identifiers: 128 00:11:03.634 Number of Namespaces Supported: 256 00:11:03.634 Reclaim Unit Nominal Size: 6000000 bytes 00:11:03.634 Estimated Reclaim Unit Time Limit: Not Reported 00:11:03.634 RUH Desc #000: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #001: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #002: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #003: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #004: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #005: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #006: RUH Type: Initially Isolated 00:11:03.634 RUH Desc #007: RUH Type: Initially Isolated 00:11:03.634 00:11:03.634 FDP reclaim unit handle usage log page 00:11:03.634 ====================================== 00:11:03.634 Number of Reclaim Unit Handles: 8 00:11:03.634 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:03.634 RUH Usage Desc #001: RUH Attributes: Unused 00:11:03.634 RUH Usage Desc #002: RUH Attributes: Unused 00:11:03.634 RUH Usage Desc #003: RUH Attributes: Unused 00:11:03.634 RUH Usage Desc #004: RUH Attributes: Unused 00:11:03.634 RUH Usage Desc #005: RUH Attributes: Unused 00:11:03.634 RUH Usage Desc #006: RUH Attributes: Unused 00:11:03.634 RUH Usage Desc #007: RUH Attributes: Unused 00:11:03.634 00:11:03.634 FDP statistics log page 00:11:03.634 ======================= 00:11:03.634 Host bytes with metadata written: 837447680 00:11:03.634 Media bytes with metadata written: 837529600 00:11:03.634 Media bytes erased: 0 00:11:03.634 00:11:03.634 FDP Reclaim unit handle status 00:11:03.634 ============================== 00:11:03.634 Number of RUHS descriptors: 2 00:11:03.634 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004159 00:11:03.634 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:03.634 00:11:03.634 FDP write on placement id: 0 success 00:11:03.634 00:11:03.634 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:11:03.634 00:11:03.634 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:03.634 00:11:03.634 Get Feature: FDP Events for Placement handle: #0 00:11:03.634 ======================== 00:11:03.634 Number of FDP Events: 6 00:11:03.634 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:03.634 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:03.634 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:03.634 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:03.634 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:03.634 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:03.634 00:11:03.634 FDP events log page 00:11:03.634 =================== 00:11:03.634 Number of FDP events: 1 00:11:03.634 FDP Event #0: 00:11:03.634 Event Type: RU Not Written to Capacity 00:11:03.634 Placement Identifier: Valid 00:11:03.634 NSID: Valid 00:11:03.634 Location: Valid 00:11:03.634 Placement Identifier: 0 00:11:03.634 Event Timestamp: 9 00:11:03.634 Namespace Identifier: 1 00:11:03.634 Reclaim Group Identifier: 0 00:11:03.634 Reclaim Unit Handle Identifier: 0 00:11:03.634 00:11:03.634 FDP test passed 00:11:03.634 00:11:03.634 real 0m0.311s 00:11:03.634 user 0m0.115s 00:11:03.634 sys 0m0.094s 00:11:03.892 11:22:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.892 ************************************ 00:11:03.892 11:22:46 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:03.892 END TEST nvme_flexible_data_placement 00:11:03.892 ************************************ 00:11:03.892 00:11:03.892 real 0m8.307s 00:11:03.892 user 0m1.467s 00:11:03.892 sys 0m1.727s 00:11:03.892 11:22:46 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:03.892 11:22:46 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:03.892 ************************************ 00:11:03.892 END TEST nvme_fdp 00:11:03.892 ************************************ 00:11:03.892 11:22:46 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:03.892 11:22:46 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:03.892 11:22:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:03.892 11:22:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:03.892 11:22:46 -- common/autotest_common.sh@10 -- # set +x 00:11:03.892 ************************************ 00:11:03.892 START TEST nvme_rpc 00:11:03.892 ************************************ 00:11:03.892 11:22:46 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:03.892 * Looking for test storage... 
00:11:03.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:03.892 11:22:46 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:03.892 11:22:46 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:03.892 11:22:46 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:04.151 11:22:46 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.151 11:22:46 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:04.151 11:22:46 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.151 11:22:46 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:04.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.151 --rc genhtml_branch_coverage=1 00:11:04.151 --rc genhtml_function_coverage=1 00:11:04.151 --rc genhtml_legend=1 00:11:04.151 --rc geninfo_all_blocks=1 00:11:04.151 --rc geninfo_unexecuted_blocks=1 00:11:04.151 00:11:04.151 ' 00:11:04.151 11:22:46 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:04.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.151 --rc genhtml_branch_coverage=1 00:11:04.151 --rc genhtml_function_coverage=1 00:11:04.151 --rc genhtml_legend=1 00:11:04.152 --rc geninfo_all_blocks=1 00:11:04.152 --rc geninfo_unexecuted_blocks=1 00:11:04.152 00:11:04.152 ' 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:04.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.152 --rc genhtml_branch_coverage=1 00:11:04.152 --rc genhtml_function_coverage=1 00:11:04.152 --rc genhtml_legend=1 00:11:04.152 --rc geninfo_all_blocks=1 00:11:04.152 --rc geninfo_unexecuted_blocks=1 00:11:04.152 00:11:04.152 ' 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:04.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.152 --rc genhtml_branch_coverage=1 00:11:04.152 --rc genhtml_function_coverage=1 00:11:04.152 --rc genhtml_legend=1 00:11:04.152 --rc geninfo_all_blocks=1 00:11:04.152 --rc geninfo_unexecuted_blocks=1 00:11:04.152 00:11:04.152 ' 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67215 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:04.152 11:22:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67215 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67215 ']' 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.152 11:22:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.152 [2024-11-15 11:22:47.095182] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
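[editor's note] The nvme_rpc setup traced above reduces to a small bootstrap: enumerate the local NVMe bdfs, start spdk_tgt, and wait for its RPC socket before issuing commands. A condensed sketch of that flow, with paths as in this run (the polling loop is an assumption standing in for the suite's waitforlisten helper):

    rootdir=/home/vagrant/spdk_repo/spdk
    # enumerate NVMe bdfs exactly as get_first_nvme_bdf does above
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                                   # 0000:00:10.0 in this run
    "$rootdir/build/bin/spdk_tgt" -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT
    # stand-in for waitforlisten: poll until the target answers on /var/tmp/spdk.sock
    until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"

Once the controller is attached as Nvme0, the test deliberately feeds bdev_nvme_apply_firmware a non-existent file to exercise the error path, which produces the JSON-RPC -32603 "open file failed." response below.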
00:11:04.152 [2024-11-15 11:22:47.095371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67215 ] 00:11:04.411 [2024-11-15 11:22:47.290567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:04.690 [2024-11-15 11:22:47.447360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.690 [2024-11-15 11:22:47.447399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.623 11:22:48 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.623 11:22:48 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:05.623 11:22:48 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:05.881 Nvme0n1 00:11:05.881 11:22:48 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:05.881 11:22:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:06.194 request: 00:11:06.194 { 00:11:06.194 "bdev_name": "Nvme0n1", 00:11:06.194 "filename": "non_existing_file", 00:11:06.194 "method": "bdev_nvme_apply_firmware", 00:11:06.194 "req_id": 1 00:11:06.194 } 00:11:06.194 Got JSON-RPC error response 00:11:06.194 response: 00:11:06.194 { 00:11:06.194 "code": -32603, 00:11:06.194 "message": "open file failed." 00:11:06.194 } 00:11:06.194 11:22:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:06.194 11:22:48 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:06.194 11:22:48 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:06.452 11:22:49 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:06.452 11:22:49 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67215 00:11:06.452 11:22:49 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67215 ']' 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67215 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67215 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67215' 00:11:06.453 killing process with pid 67215 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67215 00:11:06.453 11:22:49 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67215 00:11:08.982 00:11:08.982 real 0m4.686s 00:11:08.982 user 0m8.857s 00:11:08.982 sys 0m0.780s 00:11:08.982 11:22:51 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.982 11:22:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.982 ************************************ 00:11:08.982 END TEST nvme_rpc 00:11:08.982 ************************************ 00:11:08.982 11:22:51 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.982 11:22:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:11:08.982 11:22:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.982 11:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:08.982 ************************************ 00:11:08.982 START TEST nvme_rpc_timeouts 00:11:08.982 ************************************ 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:08.982 * Looking for test storage... 00:11:08.982 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.982 11:22:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.982 --rc genhtml_branch_coverage=1 00:11:08.982 --rc genhtml_function_coverage=1 00:11:08.982 --rc genhtml_legend=1 00:11:08.982 --rc geninfo_all_blocks=1 00:11:08.982 --rc geninfo_unexecuted_blocks=1 00:11:08.982 00:11:08.982 ' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.982 --rc genhtml_branch_coverage=1 00:11:08.982 --rc genhtml_function_coverage=1 00:11:08.982 --rc genhtml_legend=1 00:11:08.982 --rc geninfo_all_blocks=1 00:11:08.982 --rc geninfo_unexecuted_blocks=1 00:11:08.982 00:11:08.982 ' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.982 --rc genhtml_branch_coverage=1 00:11:08.982 --rc genhtml_function_coverage=1 00:11:08.982 --rc genhtml_legend=1 00:11:08.982 --rc geninfo_all_blocks=1 00:11:08.982 --rc geninfo_unexecuted_blocks=1 00:11:08.982 00:11:08.982 ' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.982 --rc genhtml_branch_coverage=1 00:11:08.982 --rc genhtml_function_coverage=1 00:11:08.982 --rc genhtml_legend=1 00:11:08.982 --rc geninfo_all_blocks=1 00:11:08.982 --rc geninfo_unexecuted_blocks=1 00:11:08.982 00:11:08.982 ' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67291 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67291 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67328 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:08.982 11:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67328 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67328 ']' 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.982 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.983 11:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:08.983 [2024-11-15 11:22:51.744551] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:11:08.983 [2024-11-15 11:22:51.744776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67328 ] 00:11:09.240 [2024-11-15 11:22:51.937116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:09.240 [2024-11-15 11:22:52.092541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.240 [2024-11-15 11:22:52.092558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.174 11:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.174 Checking default timeout settings: 00:11:10.174 11:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:11:10.174 11:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:10.174 11:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:10.740 Making settings changes with rpc: 00:11:10.740 11:22:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:10.740 11:22:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:10.998 Check default vs. modified settings: 00:11:10.998 11:22:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:10.998 11:22:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:11.256 Setting action_on_timeout is changed as expected. 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:11.256 Setting timeout_us is changed as expected. 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
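[editor's note] Each setting check above follows the same grep/awk/sed pipeline against the two config snapshots saved with save_config; condensed, the traced loop amounts to (filenames as in this run):

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
      # pull the value column and strip punctuation so none/abort and 0/12000000 compare cleanly
      before=$(grep "$setting" /tmp/settings_default_67291 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_67291 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" = "$after" ] && exit 1             # an unchanged setting fails the test
      echo "Setting $setting is changed as expected."
    done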
00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:11.256 Setting timeout_admin_us is changed as expected. 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67291 /tmp/settings_modified_67291 00:11:11.256 11:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67328 00:11:11.256 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67328 ']' 00:11:11.256 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67328 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67328 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.514 killing process with pid 67328 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67328' 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67328 00:11:11.514 11:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67328 00:11:14.045 RPC TIMEOUT SETTING TEST PASSED. 00:11:14.045 11:22:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
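[editor's note] The killprocess call traced just before the PASSED line is a small guard-then-kill helper; a condensed sketch of what the trace shows (the sudo special-casing is elided):

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                     # still running?
      if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap it so the suite exits cleanly
    }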
00:11:14.045 ************************************ 00:11:14.045 END TEST nvme_rpc_timeouts 00:11:14.045 ************************************ 00:11:14.045 00:11:14.045 real 0m4.987s 00:11:14.045 user 0m9.701s 00:11:14.045 sys 0m0.814s 00:11:14.045 11:22:56 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.045 11:22:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:14.045 11:22:56 -- spdk/autotest.sh@239 -- # uname -s 00:11:14.045 11:22:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:14.045 11:22:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:14.045 11:22:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:14.045 11:22:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.045 11:22:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.045 ************************************ 00:11:14.045 START TEST sw_hotplug 00:11:14.045 ************************************ 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:14.045 * Looking for test storage... 00:11:14.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.045 11:22:56 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.045 --rc genhtml_branch_coverage=1 00:11:14.045 --rc genhtml_function_coverage=1 00:11:14.045 --rc genhtml_legend=1 00:11:14.045 --rc geninfo_all_blocks=1 00:11:14.045 --rc geninfo_unexecuted_blocks=1 00:11:14.045 00:11:14.045 ' 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.045 --rc genhtml_branch_coverage=1 00:11:14.045 --rc genhtml_function_coverage=1 00:11:14.045 --rc genhtml_legend=1 00:11:14.045 --rc geninfo_all_blocks=1 00:11:14.045 --rc geninfo_unexecuted_blocks=1 00:11:14.045 00:11:14.045 ' 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.045 --rc genhtml_branch_coverage=1 00:11:14.045 --rc genhtml_function_coverage=1 00:11:14.045 --rc genhtml_legend=1 00:11:14.045 --rc geninfo_all_blocks=1 00:11:14.045 --rc geninfo_unexecuted_blocks=1 00:11:14.045 00:11:14.045 ' 00:11:14.045 11:22:56 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.045 --rc genhtml_branch_coverage=1 00:11:14.045 --rc genhtml_function_coverage=1 00:11:14.045 --rc genhtml_legend=1 00:11:14.045 --rc geninfo_all_blocks=1 00:11:14.045 --rc geninfo_unexecuted_blocks=1 00:11:14.045 00:11:14.045 ' 00:11:14.045 11:22:56 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:14.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:14.304 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:14.304 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:14.304 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:14.304 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:14.304 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:14.304 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:14.304 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
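[editor's note] The nvme_in_userspace call that starts here is traced in full below; stripped of the per-bdf PCI_ALLOWED checks, the enumeration is an lspci filter for PCI class 01 (mass storage), subclass 08 (NVM), progif 02 (NVMe), taken straight from the trace:

    # list bdfs of NVMe controllers visible to userspace
    lspci -mm -n -D | grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # in this run it prints, one per line:
    # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

The test then keeps only the first nvme_count=2 of those devices for the hotplug exercise.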
00:11:14.304 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:14.304 11:22:57 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:14.304 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:14.562 11:22:57 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:14.562 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:14.562 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:14.562 11:22:57 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:14.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.080 Waiting for block devices as requested 00:11:15.080 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.080 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.080 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.338 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:20.633 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:20.633 11:23:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:20.633 11:23:03 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:20.892 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:20.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:20.892 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:21.149 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:21.408 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:21.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:21.666 11:23:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68203 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:21.666 11:23:04 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:21.666 11:23:04 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:21.666 11:23:04 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:21.666 11:23:04 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:21.666 11:23:04 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:21.666 11:23:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:21.925 Initializing NVMe Controllers 00:11:21.925 Attaching to 0000:00:10.0 00:11:21.925 Attaching to 0000:00:11.0 00:11:21.925 Attached to 0000:00:10.0 00:11:21.925 Attached to 0000:00:11.0 00:11:21.925 Initialization complete. Starting I/O... 
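[editor's note] From here the hotplug example app drives I/O while remove_attach_helper repeatedly yanks and restores the two allowed controllers. A hedged reconstruction of one event cycle as traced below; only the echoed values appear in the log, so the sysfs target paths are assumptions based on the standard Linux PCI interfaces:

    for dev in "${nvmes[@]}"; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"    # sw_hotplug.sh@40: surprise hot-remove
    done
    echo 1 > /sys/bus/pci/rescan                      # @56: rediscover the slots
    for dev in "${nvmes[@]}"; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the driver
      echo "$dev" > /sys/bus/pci/drivers_probe        # @60/@61 (assumed target for the echoed bdf)
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done
    sleep 12                                          # @66: 2 x hotplug_wait before the next event

The surprise removal is what triggers the nvme_ctrlr_fail / "aborting outstanding command" messages below: the driver notices the device vanished mid-I/O and fails the controller.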
00:11:21.925 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:21.925 QEMU NVMe Ctrl (12341 ): 6 I/Os completed (+6) 00:11:21.925 00:11:22.861 QEMU NVMe Ctrl (12340 ): 1146 I/Os completed (+1146) 00:11:22.861 QEMU NVMe Ctrl (12341 ): 1191 I/Os completed (+1185) 00:11:22.861 00:11:23.866 QEMU NVMe Ctrl (12340 ): 2690 I/Os completed (+1544) 00:11:23.866 QEMU NVMe Ctrl (12341 ): 2764 I/Os completed (+1573) 00:11:23.866 00:11:25.242 QEMU NVMe Ctrl (12340 ): 4346 I/Os completed (+1656) 00:11:25.242 QEMU NVMe Ctrl (12341 ): 4456 I/Os completed (+1692) 00:11:25.242 00:11:26.177 QEMU NVMe Ctrl (12340 ): 5793 I/Os completed (+1447) 00:11:26.177 QEMU NVMe Ctrl (12341 ): 6051 I/Os completed (+1595) 00:11:26.177 00:11:27.113 QEMU NVMe Ctrl (12340 ): 7433 I/Os completed (+1640) 00:11:27.113 QEMU NVMe Ctrl (12341 ): 7715 I/Os completed (+1664) 00:11:27.113 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.680 [2024-11-15 11:23:10.528451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:27.680 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:27.680 [2024-11-15 11:23:10.530474] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.530544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.530574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.530613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:27.680 [2024-11-15 11:23:10.534411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.534536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.534563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.534587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:27.680 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:27.680 EAL: Scan for (pci) bus failed. 00:11:27.680 [2024-11-15 11:23:10.556770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
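The 'echo 1' traced at sw_hotplug.sh line 40 just before the controller-removed messages is the hot-remove itself, done purely in software through sysfs; the EAL 'cannot open sysfs value .../vendor' and 'Scan for (pci) bus failed' lines are the expected side effect of the device vanishing mid-rescan, and the matching removal of the second controller continues below. The xtrace does not show the redirection targets, so the paths here are the standard kernel PCI hotplug ABI rather than a quote from the script:

    bdf=0000:00:10.0

    # Hot-remove: the device disappears from sysfs and the app sees the
    # controller fail, aborting its outstanding commands.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"

    # Hot-add (the line-56..62 echoes later in the log): re-enumerate, then
    # steer the rediscovered device back to the test driver and clear the
    # override again.
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"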
00:11:27.680 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:27.680 [2024-11-15 11:23:10.558748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.558834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.558867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.558890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:27.680 [2024-11-15 11:23:10.561969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.562086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.562122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 [2024-11-15 11:23:10.562144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:27.680 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:27.680 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:27.680 EAL: Scan for (pci) bus failed. 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:27.938 Attaching to 0000:00:10.0 00:11:27.938 Attached to 0000:00:10.0 00:11:27.938 QEMU NVMe Ctrl (12340 ): 85 I/Os completed (+85) 00:11:27.938 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.938 11:23:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:27.939 Attaching to 0000:00:11.0 00:11:27.939 Attached to 0000:00:11.0 00:11:28.872 QEMU NVMe Ctrl (12340 ): 1629 I/Os completed (+1544) 00:11:28.872 QEMU NVMe Ctrl (12341 ): 1583 I/Os completed (+1583) 00:11:28.872 00:11:30.246 QEMU NVMe Ctrl (12340 ): 3325 I/Os completed (+1696) 00:11:30.246 QEMU NVMe Ctrl (12341 ): 3311 I/Os completed (+1728) 00:11:30.246 00:11:31.198 QEMU NVMe Ctrl (12340 ): 5017 I/Os completed (+1692) 00:11:31.198 QEMU NVMe Ctrl (12341 ): 5050 I/Os completed (+1739) 00:11:31.198 00:11:32.130 QEMU NVMe Ctrl (12340 ): 6701 I/Os completed (+1684) 00:11:32.130 QEMU NVMe Ctrl (12341 ): 6762 I/Os completed (+1712) 00:11:32.130 00:11:33.064 QEMU NVMe Ctrl (12340 ): 8409 I/Os completed (+1708) 00:11:33.064 QEMU NVMe Ctrl (12341 ): 8517 I/Os completed (+1755) 00:11:33.064 00:11:33.994 QEMU NVMe Ctrl (12340 ): 10201 I/Os completed (+1792) 00:11:33.994 QEMU NVMe Ctrl (12341 ): 10325 I/Os completed (+1808) 00:11:33.994 00:11:34.928 QEMU NVMe Ctrl (12340 ): 11904 I/Os completed (+1703) 00:11:34.928 
QEMU NVMe Ctrl (12341 ): 12087 I/Os completed (+1762) 00:11:34.928 00:11:35.860 QEMU NVMe Ctrl (12340 ): 13552 I/Os completed (+1648) 00:11:35.860 QEMU NVMe Ctrl (12341 ): 13819 I/Os completed (+1732) 00:11:35.860 00:11:37.233 QEMU NVMe Ctrl (12340 ): 15328 I/Os completed (+1776) 00:11:37.233 QEMU NVMe Ctrl (12341 ): 15617 I/Os completed (+1798) 00:11:37.233 00:11:38.204 QEMU NVMe Ctrl (12340 ): 17136 I/Os completed (+1808) 00:11:38.204 QEMU NVMe Ctrl (12341 ): 17456 I/Os completed (+1839) 00:11:38.204 00:11:39.138 QEMU NVMe Ctrl (12340 ): 18980 I/Os completed (+1844) 00:11:39.138 QEMU NVMe Ctrl (12341 ): 19325 I/Os completed (+1869) 00:11:39.138 00:11:40.145 QEMU NVMe Ctrl (12340 ): 20876 I/Os completed (+1896) 00:11:40.145 QEMU NVMe Ctrl (12341 ): 21233 I/Os completed (+1908) 00:11:40.145 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.145 [2024-11-15 11:23:22.854685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:40.145 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:40.145 [2024-11-15 11:23:22.857299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.857385] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.857428] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.857466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:40.145 [2024-11-15 11:23:22.860603] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.860662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.860716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.860745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.145 [2024-11-15 11:23:22.882633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:40.145 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:40.145 [2024-11-15 11:23:22.884796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.884855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.884896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.884920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:40.145 [2024-11-15 11:23:22.887704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.887764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.887790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 [2024-11-15 11:23:22.887812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.145 11:23:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.145 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:40.404 Attaching to 0000:00:10.0 00:11:40.404 Attached to 0000:00:10.0 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.404 11:23:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:40.404 Attaching to 0000:00:11.0 00:11:40.404 Attached to 0000:00:11.0 00:11:40.971 QEMU NVMe Ctrl (12340 ): 1276 I/Os completed (+1276) 00:11:40.971 QEMU NVMe Ctrl (12341 ): 1141 I/Os completed (+1141) 00:11:40.971 00:11:41.907 QEMU NVMe Ctrl (12340 ): 3172 I/Os completed (+1896) 00:11:41.907 QEMU NVMe Ctrl (12341 ): 3049 I/Os completed (+1908) 00:11:41.907 00:11:43.282 QEMU NVMe Ctrl (12340 ): 5068 I/Os completed (+1896) 00:11:43.282 QEMU NVMe Ctrl (12341 ): 4954 I/Os completed (+1905) 00:11:43.282 00:11:44.216 QEMU NVMe Ctrl (12340 ): 7036 I/Os completed (+1968) 00:11:44.216 QEMU NVMe Ctrl (12341 ): 6942 I/Os completed (+1988) 00:11:44.216 00:11:45.151 QEMU NVMe Ctrl (12340 ): 8984 I/Os completed (+1948) 00:11:45.151 QEMU NVMe Ctrl (12341 ): 8918 I/Os completed (+1976) 00:11:45.151 00:11:46.086 QEMU NVMe Ctrl (12340 ): 10840 I/Os completed (+1856) 00:11:46.086 QEMU NVMe Ctrl (12341 ): 10812 I/Os completed (+1894) 00:11:46.086 00:11:47.022 QEMU NVMe Ctrl (12340 ): 12812 I/Os completed (+1972) 00:11:47.022 QEMU NVMe Ctrl (12341 ): 12808 I/Os completed (+1996) 00:11:47.022 00:11:47.958 QEMU NVMe Ctrl (12340 ): 14652 I/Os completed (+1840) 00:11:47.958 QEMU NVMe Ctrl (12341 ): 14669 I/Os completed (+1861) 00:11:47.958 00:11:48.893 
QEMU NVMe Ctrl (12340 ): 16520 I/Os completed (+1868) 00:11:48.893 QEMU NVMe Ctrl (12341 ): 16550 I/Os completed (+1881) 00:11:48.893 00:11:50.267 QEMU NVMe Ctrl (12340 ): 18400 I/Os completed (+1880) 00:11:50.267 QEMU NVMe Ctrl (12341 ): 18452 I/Os completed (+1902) 00:11:50.267 00:11:51.238 QEMU NVMe Ctrl (12340 ): 20272 I/Os completed (+1872) 00:11:51.238 QEMU NVMe Ctrl (12341 ): 20345 I/Os completed (+1893) 00:11:51.238 00:11:52.173 QEMU NVMe Ctrl (12340 ): 22128 I/Os completed (+1856) 00:11:52.173 QEMU NVMe Ctrl (12341 ): 22249 I/Os completed (+1904) 00:11:52.173 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.432 [2024-11-15 11:23:35.186501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:52.432 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:52.432 [2024-11-15 11:23:35.189711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.189805] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.189852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.189895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:52.432 [2024-11-15 11:23:35.194528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.194619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.194658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.194699] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.432 [2024-11-15 11:23:35.221019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:52.432 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:52.432 [2024-11-15 11:23:35.223893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.223978] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.224057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.224115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:52.432 [2024-11-15 11:23:35.228359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.228455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.228500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 [2024-11-15 11:23:35.228535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.432 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:52.432 EAL: Scan for (pci) bus failed. 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.432 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:52.691 Attaching to 0000:00:10.0 00:11:52.691 Attached to 0000:00:10.0 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.691 11:23:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.691 Attaching to 0000:00:11.0 00:11:52.691 Attached to 0000:00:11.0 00:11:52.691 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:52.691 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:52.691 [2024-11-15 11:23:35.557016] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:04.891 11:23:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:04.891 11:23:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.891 11:23:47 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.03 00:12:04.891 11:23:47 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.03 00:12:04.891 11:23:47 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:04.891 11:23:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.03 00:12:04.891 11:23:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.03 2 00:12:04.891 remove_attach_helper took 43.03s to complete (handling 2 nvme drive(s)) 11:23:47 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68203 00:12:11.449 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68203) - No such process 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68203 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68752 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:11.449 11:23:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68752 00:12:11.449 11:23:53 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 68752 ']' 00:12:11.449 11:23:53 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.449 11:23:53 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:11.449 11:23:53 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.449 11:23:53 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:11.449 11:23:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.449 [2024-11-15 11:23:53.688248] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:12:11.449 [2024-11-15 11:23:53.688671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68752 ] 00:12:11.449 [2024-11-15 11:23:53.867222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.449 [2024-11-15 11:23:53.991091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:12.014 11:23:54 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:12.014 11:23:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.570 11:24:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.570 11:24:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.570 11:24:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.570 [2024-11-15 11:24:00.930719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
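From here the helper runs with use_bdev=true: instead of trusting the example app, it asks the freshly started spdk_tgt (hotplug detection was just enabled with 'rpc_cmd bdev_nvme_set_hotplug -e') which NVMe bdevs still exist. The bdev_bdfs function traced above boils down to one pipeline; rpc_cmd is the harness wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket seen in the log, so a standalone equivalent is:

    # PCI addresses backing the current NVMe bdevs, deduplicated and sorted.
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u

With both controllers attached this prints 0000:00:10.0 and 0000:00:11.0, which is exactly the '(( 2 > 0 ))' count tested below.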
00:12:18.570 [2024-11-15 11:24:00.935921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:00.935993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:00.936020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 [2024-11-15 11:24:00.936065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:00.936086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:00.936105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 [2024-11-15 11:24:00.936123] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:00.936142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:00.936158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 [2024-11-15 11:24:00.936181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:00.936197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:00.936214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:18.570 11:24:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:18.570 [2024-11-15 11:24:01.330723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
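The '(( 2 > 0 ))' / 'sleep 0.5' pair above and the 'Still waiting for %s to be gone' printf just below are one polling loop: after the remove, the helper re-queries the bdev list every half second until both BDFs have dropped out, at which point '(( 0 > 0 ))' ends the loop. Reconstructed from the trace (details of the real loop in sw_hotplug.sh may differ):

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done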
00:12:18.570 [2024-11-15 11:24:01.333949] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:01.334203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:01.334244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 [2024-11-15 11:24:01.334277] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:01.334299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:01.334316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 [2024-11-15 11:24:01.334337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:01.334353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:01.334372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 [2024-11-15 11:24:01.334388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.570 [2024-11-15 11:24:01.334407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.570 [2024-11-15 11:24:01.334424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.570 11:24:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.570 11:24:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.570 11:24:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:18.570 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:18.827 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:19.084 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.084 11:24:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.359 11:24:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.359 11:24:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.359 11:24:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.359 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.359 11:24:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.359 11:24:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.359 [2024-11-15 11:24:13.931003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
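The heavily escaped test traced at sw_hotplug.sh line 71 above is only xtrace quoting: bash prints the quoted right-hand side of == character by character. It is a literal comparison confirming that, after the rescan and rebind, the target again reports exactly the two controllers the test started with:

    bdfs=($(bdev_bdfs))                 # already sorted by the 'sort -u' inside
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # expect: 0000:00:10.0 0000:00:11.0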
00:12:31.359 [2024-11-15 11:24:13.933928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.359 [2024-11-15 11:24:13.934167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.359 [2024-11-15 11:24:13.934201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.359 [2024-11-15 11:24:13.934235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.359 [2024-11-15 11:24:13.934254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.359 [2024-11-15 11:24:13.934274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.359 [2024-11-15 11:24:13.934292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.359 [2024-11-15 11:24:13.934310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.360 [2024-11-15 11:24:13.934326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.360 [2024-11-15 11:24:13.934346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.360 [2024-11-15 11:24:13.934362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.360 [2024-11-15 11:24:13.934381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.360 11:24:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.360 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:31.360 11:24:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:31.618 [2024-11-15 11:24:14.330990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:31.618 [2024-11-15 11:24:14.333726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.618 [2024-11-15 11:24:14.333947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.618 [2024-11-15 11:24:14.333987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.618 [2024-11-15 11:24:14.334019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.618 [2024-11-15 11:24:14.334039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.618 [2024-11-15 11:24:14.334071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.618 [2024-11-15 11:24:14.334093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.618 [2024-11-15 11:24:14.334109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.618 [2024-11-15 11:24:14.334127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.618 [2024-11-15 11:24:14.334143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:31.618 [2024-11-15 11:24:14.334168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:31.618 [2024-11-15 11:24:14.334184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.618 11:24:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.618 11:24:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.618 11:24:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:31.618 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:31.876 11:24:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.080 11:24:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.080 11:24:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.080 11:24:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.080 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.080 11:24:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.080 11:24:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.080 [2024-11-15 11:24:26.932167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:44.081 [2024-11-15 11:24:26.935293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.081 [2024-11-15 11:24:26.935358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.081 [2024-11-15 11:24:26.935397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.081 [2024-11-15 11:24:26.935432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.081 [2024-11-15 11:24:26.935466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.081 [2024-11-15 11:24:26.935494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.081 [2024-11-15 11:24:26.935513] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.081 [2024-11-15 11:24:26.935536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.081 [2024-11-15 11:24:26.935552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.081 [2024-11-15 11:24:26.935575] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.081 [2024-11-15 11:24:26.935592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.081 [2024-11-15 11:24:26.935614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.081 11:24:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.081 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:44.081 11:24:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:44.647 [2024-11-15 11:24:27.332152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:44.647 [2024-11-15 11:24:27.335215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.648 [2024-11-15 11:24:27.335454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.648 [2024-11-15 11:24:27.335513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.648 [2024-11-15 11:24:27.335544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.648 [2024-11-15 11:24:27.335564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.648 [2024-11-15 11:24:27.335580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.648 [2024-11-15 11:24:27.335602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.648 [2024-11-15 11:24:27.335618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.648 [2024-11-15 11:24:27.335639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.648 [2024-11-15 11:24:27.335655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:44.648 [2024-11-15 11:24:27.335673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.648 [2024-11-15 11:24:27.335688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:44.648 11:24:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.648 11:24:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:44.648 11:24:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:44.648 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:44.906 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:44.907 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:44.907 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:44.907 11:24:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.04 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.04 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.04 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.04 2 00:12:57.108 remove_attach_helper took 45.04s to complete (handling 2 nvme drive(s)) 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.108 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.108 11:24:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:57.109 11:24:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:57.109 11:24:39 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:57.109 11:24:39 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:57.109 11:24:39 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:57.109 11:24:39 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:57.109 11:24:39 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:57.109 11:24:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:57.109 11:24:39 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.672 11:24:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.672 11:24:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.672 11:24:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 11:24:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.672 [2024-11-15 11:24:46.007597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:03.672 [2024-11-15 11:24:46.009656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.009713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.009737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 [2024-11-15 11:24:46.009769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.009787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.009805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 [2024-11-15 11:24:46.009822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.009854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.009870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 [2024-11-15 11:24:46.009889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.009904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.009928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:03.672 [2024-11-15 11:24:46.407585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
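This is the start of a second three-event bdev-mode pass: after the previous cycle completed in 45.04 s (bash's 'time' with TIMEFORMAT=%2R produces that two-decimal figure), the helper bounced the target's hotplug poller at sw_hotplug.sh lines 119-120 before going around again. Outside the harness that toggle is just two RPCs (again assuming rpc_cmd maps to scripts/rpc.py):

    scripts/rpc.py bdev_nvme_set_hotplug -d   # stop the hotplug poller
    scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it for the next pass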
00:13:03.672 [2024-11-15 11:24:46.409868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.409929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.409953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 [2024-11-15 11:24:46.409975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.409993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.410007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 [2024-11-15 11:24:46.410024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.410038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.410087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 [2024-11-15 11:24:46.410105] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.672 [2024-11-15 11:24:46.410122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.672 [2024-11-15 11:24:46.410136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.672 11:24:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.672 11:24:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.672 11:24:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:03.672 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.932 11:24:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.141 11:24:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.141 11:24:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.141 11:24:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.141 11:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.141 11:24:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.141 11:24:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.141 11:24:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.141 [2024-11-15 11:24:59.007749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:16.141 [2024-11-15 11:24:59.009856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.141 [2024-11-15 11:24:59.010068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.141 [2024-11-15 11:24:59.010256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.141 [2024-11-15 11:24:59.010415] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.141 [2024-11-15 11:24:59.010590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.141 [2024-11-15 11:24:59.010730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.141 [2024-11-15 11:24:59.010876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.141 [2024-11-15 11:24:59.011010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.141 [2024-11-15 11:24:59.011199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.141 [2024-11-15 11:24:59.011387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.141 [2024-11-15 11:24:59.011529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.141 [2024-11-15 11:24:59.011669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.141 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:16.141 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:16.707 [2024-11-15 11:24:59.407768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
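For readers following the xtrace: the repeated sw_hotplug.sh@12/@13 lines above come from a helper that asks SPDK which NVMe bdevs still exist, and the @50/@51 lines poll it until the hot-removed controllers disappear. A minimal reconstruction inferred from the trace follows; the process-substitution form is implied by the /dev/fd/63 argument jq receives, but the exact quoting in the real script may differ.

    # Sketch of nvme/sw_hotplug.sh lines 12-13 as seen in the xtrace above.
    # rpc_cmd is the test framework's wrapper around scripts/rpc.py.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Lines 50-51: poll until both hot-removed controllers are gone.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

The printf call with two PCI addresses matches the two "Still waiting for ... to be gone" expansions visible in the trace: bash repeats the format string once per remaining argument.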
00:13:16.707 [2024-11-15 11:24:59.410375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.707 [2024-11-15 11:24:59.410624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.707 [2024-11-15 11:24:59.410812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.707 [2024-11-15 11:24:59.411111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.707 [2024-11-15 11:24:59.411256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.707 [2024-11-15 11:24:59.411396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.707 [2024-11-15 11:24:59.411574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.708 [2024-11-15 11:24:59.411759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.708 [2024-11-15 11:24:59.411936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.708 [2024-11-15 11:24:59.412203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:16.708 [2024-11-15 11:24:59.412457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:16.708 [2024-11-15 11:24:59.412598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.708 11:24:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.708 11:24:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.708 11:24:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:16.708 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:16.965 11:24:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:29.163 11:25:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.163 11:25:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.163 11:25:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:29.163 11:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.163 11:25:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.163 11:25:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.163 11:25:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.163 [2024-11-15 11:25:12.007898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
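The bare echo statements traced at sw_hotplug.sh@56 through @62 above write to redirections that xtrace does not show. A plausible reading, and it is only an assumption, is the standard sysfs PCI hotplug interface; every path in the sketch below is an illustration, not something the log confirms.

    # Assumed re-attach sequence for sw_hotplug.sh lines 56-66.
    echo 1 > /sys/bus/pci/rescan                          # line 56 (assumed target)
    for dev in "${nvmes[@]}"; do                          # line 58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # line 59
        # Lines 60-61 echo the BDF twice in the trace; one plausible target:
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # line 62: clear override
    done
    sleep 12                                              # line 66: let devices settle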
00:13:29.163 [2024-11-15 11:25:12.009827] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.163 [2024-11-15 11:25:12.009897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.163 [2024-11-15 11:25:12.009919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.163 [2024-11-15 11:25:12.009948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.163 [2024-11-15 11:25:12.009964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.164 [2024-11-15 11:25:12.009981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.164 [2024-11-15 11:25:12.009996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.164 [2024-11-15 11:25:12.010015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.164 [2024-11-15 11:25:12.010030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.164 [2024-11-15 11:25:12.010098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.164 [2024-11-15 11:25:12.010117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.164 [2024-11-15 11:25:12.010134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.164 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:29.164 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:29.729 [2024-11-15 11:25:12.407925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:29.729 [2024-11-15 11:25:12.410426] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.729 [2024-11-15 11:25:12.410494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.729 [2024-11-15 11:25:12.410521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.729 [2024-11-15 11:25:12.410549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.729 [2024-11-15 11:25:12.410570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.729 [2024-11-15 11:25:12.410586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.729 [2024-11-15 11:25:12.410606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.729 [2024-11-15 11:25:12.410622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.729 [2024-11-15 11:25:12.410644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.729 [2024-11-15 11:25:12.410692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:29.729 [2024-11-15 11:25:12.410714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.729 [2024-11-15 11:25:12.410729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:29.729 11:25:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.729 11:25:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.729 11:25:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:29.729 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:29.987 11:25:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.01 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.01 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.01 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.01 2 00:13:42.283 remove_attach_helper took 45.01s to complete (handling 2 nvme drive(s)) 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:42.283 11:25:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68752 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 68752 ']' 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 68752 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68752 00:13:42.283 killing process with pid 68752 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68752' 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@971 -- # kill 68752 00:13:42.283 11:25:24 sw_hotplug -- common/autotest_common.sh@976 -- # wait 68752 00:13:44.181 11:25:27 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:44.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:45.005 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:45.005 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:45.263 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.263 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:45.263 ************************************ 00:13:45.263 END TEST sw_hotplug 00:13:45.263 ************************************ 
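Putting the traced pieces together, the test that just finished (45.01 s per remove/attach pass across 2 drives, per the printf reported above) has roughly the following shape. This is a sketch assembled from the @38-@71 xtrace lines, not the literal script; the remove target and the example counts are assumptions.

    nvmes=(0000:00:10.0 0000:00:11.0)   # the two controllers this job cycles
    hotplug_events=3                    # example count; the real value is set by the test

    while (( hotplug_events-- )); do                      # line 38
        for dev in "${nvmes[@]}"; do                      # line 39
            echo 1 > "/sys/bus/pci/devices/$dev/remove"   # line 40; target path assumed
        done
        # detach wait, rescan, rebind, sleep 12: see the two sketches above
        bdfs=($(bdev_bdfs))                               # line 70
        [[ ${bdfs[*]} == "${nvmes[*]}" ]]                 # line 71: both BDFs are back
    done
    helper_time=45.01   # example; measured by the harness (autotest_common.sh 717-720)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"                     # line 22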
00:13:45.263 00:13:45.263 real 2m31.627s 00:13:45.263 user 1m52.611s 00:13:45.263 sys 0m18.614s 00:13:45.263 11:25:28 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:45.263 11:25:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:45.263 11:25:28 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:45.263 11:25:28 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:45.263 11:25:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:45.263 11:25:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:45.263 11:25:28 -- common/autotest_common.sh@10 -- # set +x 00:13:45.263 ************************************ 00:13:45.263 START TEST nvme_xnvme 00:13:45.263 ************************************ 00:13:45.263 11:25:28 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:45.522 * Looking for test storage... 00:13:45.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:45.522 11:25:28 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:45.522 11:25:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:45.522 11:25:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:45.522 11:25:28 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.522 11:25:28 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:45.522 11:25:28 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.523 --rc genhtml_branch_coverage=1 00:13:45.523 --rc genhtml_function_coverage=1 00:13:45.523 --rc genhtml_legend=1 00:13:45.523 --rc geninfo_all_blocks=1 00:13:45.523 --rc geninfo_unexecuted_blocks=1 00:13:45.523 00:13:45.523 ' 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.523 --rc genhtml_branch_coverage=1 00:13:45.523 --rc genhtml_function_coverage=1 00:13:45.523 --rc genhtml_legend=1 00:13:45.523 --rc geninfo_all_blocks=1 00:13:45.523 --rc geninfo_unexecuted_blocks=1 00:13:45.523 00:13:45.523 ' 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.523 --rc genhtml_branch_coverage=1 00:13:45.523 --rc genhtml_function_coverage=1 00:13:45.523 --rc genhtml_legend=1 00:13:45.523 --rc geninfo_all_blocks=1 00:13:45.523 --rc geninfo_unexecuted_blocks=1 00:13:45.523 00:13:45.523 ' 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:45.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.523 --rc genhtml_branch_coverage=1 00:13:45.523 --rc genhtml_function_coverage=1 00:13:45.523 --rc genhtml_legend=1 00:13:45.523 --rc geninfo_all_blocks=1 00:13:45.523 --rc geninfo_unexecuted_blocks=1 00:13:45.523 00:13:45.523 ' 00:13:45.523 11:25:28 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.523 11:25:28 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.523 11:25:28 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.523 11:25:28 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.523 11:25:28 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.523 11:25:28 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.523 11:25:28 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.523 11:25:28 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.523 11:25:28 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:45.523 11:25:28 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.523 11:25:28 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:45.523 11:25:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.523 ************************************ 00:13:45.523 START TEST xnvme_to_malloc_dd_copy 00:13:45.523 ************************************ 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:45.523 11:25:28 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:45.523 11:25:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:45.523 { 00:13:45.523 "subsystems": [ 00:13:45.523 { 00:13:45.523 "subsystem": "bdev", 00:13:45.523 "config": [ 00:13:45.523 { 00:13:45.523 "params": { 00:13:45.523 "block_size": 512, 00:13:45.523 "num_blocks": 2097152, 00:13:45.523 "name": "malloc0" 00:13:45.523 }, 00:13:45.523 "method": "bdev_malloc_create" 00:13:45.523 }, 00:13:45.523 { 00:13:45.523 "params": { 00:13:45.523 "io_mechanism": "libaio", 00:13:45.523 "filename": "/dev/nullb0", 00:13:45.523 "name": "null0" 00:13:45.523 }, 00:13:45.523 "method": "bdev_xnvme_create" 00:13:45.523 }, 00:13:45.523 { 00:13:45.523 "method": "bdev_wait_for_examine" 00:13:45.523 } 00:13:45.523 ] 00:13:45.523 } 00:13:45.523 ] 00:13:45.523 } 00:13:45.781 [2024-11-15 11:25:28.498167] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:13:45.781 [2024-11-15 11:25:28.498652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70104 ] 00:13:45.781 [2024-11-15 11:25:28.685459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.040 [2024-11-15 11:25:28.809110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.668  [2024-11-15T11:25:32.185Z] Copying: 209/1024 [MB] (209 MBps) [2024-11-15T11:25:33.120Z] Copying: 409/1024 [MB] (200 MBps) [2024-11-15T11:25:34.497Z] Copying: 616/1024 [MB] (206 MBps) [2024-11-15T11:25:35.064Z] Copying: 827/1024 [MB] (210 MBps) [2024-11-15T11:25:38.356Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:13:55.407 00:13:55.407 11:25:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:55.407 11:25:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:55.407 11:25:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:55.407 11:25:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:55.407 { 00:13:55.407 "subsystems": [ 00:13:55.407 { 00:13:55.407 "subsystem": "bdev", 00:13:55.407 "config": [ 00:13:55.407 { 00:13:55.407 "params": { 00:13:55.407 "block_size": 512, 00:13:55.407 "num_blocks": 2097152, 00:13:55.407 "name": "malloc0" 00:13:55.407 }, 00:13:55.407 "method": "bdev_malloc_create" 00:13:55.407 }, 00:13:55.407 { 00:13:55.407 "params": { 00:13:55.407 "io_mechanism": "libaio", 00:13:55.407 "filename": "/dev/nullb0", 00:13:55.407 "name": "null0" 00:13:55.407 }, 00:13:55.407 "method": "bdev_xnvme_create" 00:13:55.407 }, 00:13:55.407 { 00:13:55.407 "method": "bdev_wait_for_examine" 00:13:55.407 } 00:13:55.407 ] 00:13:55.407 } 00:13:55.407 ] 00:13:55.407 } 00:13:55.407 [2024-11-15 11:25:38.272961] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:13:55.407 [2024-11-15 11:25:38.273146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70220 ] 00:13:55.675 [2024-11-15 11:25:38.439462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.675 [2024-11-15 11:25:38.548216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.207  [2024-11-15T11:25:42.093Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-15T11:25:43.030Z] Copying: 385/1024 [MB] (196 MBps) [2024-11-15T11:25:43.965Z] Copying: 588/1024 [MB] (203 MBps) [2024-11-15T11:25:44.901Z] Copying: 803/1024 [MB] (214 MBps) [2024-11-15T11:25:45.159Z] Copying: 1014/1024 [MB] (211 MBps) [2024-11-15T11:25:48.443Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:14:05.494 00:14:05.494 11:25:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:05.494 11:25:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:05.494 11:25:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:05.494 11:25:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:05.494 11:25:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:05.494 11:25:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 { 00:14:05.752 "subsystems": [ 00:14:05.752 { 00:14:05.752 "subsystem": "bdev", 00:14:05.752 "config": [ 00:14:05.752 { 00:14:05.752 "params": { 00:14:05.752 "block_size": 512, 00:14:05.752 "num_blocks": 2097152, 00:14:05.752 "name": "malloc0" 00:14:05.752 }, 00:14:05.752 "method": "bdev_malloc_create" 00:14:05.752 }, 00:14:05.752 { 00:14:05.752 "params": { 00:14:05.752 "io_mechanism": "io_uring", 00:14:05.752 "filename": "/dev/nullb0", 00:14:05.752 "name": "null0" 00:14:05.752 }, 00:14:05.752 "method": "bdev_xnvme_create" 00:14:05.752 }, 00:14:05.752 { 00:14:05.752 "method": "bdev_wait_for_examine" 00:14:05.752 } 00:14:05.752 ] 00:14:05.752 } 00:14:05.752 ] 00:14:05.752 } 00:14:05.752 [2024-11-15 11:25:48.513216] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:14:05.752 [2024-11-15 11:25:48.513397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70335 ] 00:14:05.752 [2024-11-15 11:25:48.698010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.010 [2024-11-15 11:25:48.819937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.551  [2024-11-15T11:25:52.435Z] Copying: 170/1024 [MB] (170 MBps) [2024-11-15T11:25:53.368Z] Copying: 344/1024 [MB] (173 MBps) [2024-11-15T11:25:54.303Z] Copying: 518/1024 [MB] (174 MBps) [2024-11-15T11:25:55.239Z] Copying: 697/1024 [MB] (179 MBps) [2024-11-15T11:25:56.173Z] Copying: 871/1024 [MB] (173 MBps) [2024-11-15T11:25:59.460Z] Copying: 1024/1024 [MB] (average 173 MBps) 00:14:16.511 00:14:16.511 11:25:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:16.511 11:25:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:16.511 11:25:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:16.511 11:25:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:16.511 { 00:14:16.511 "subsystems": [ 00:14:16.511 { 00:14:16.511 "subsystem": "bdev", 00:14:16.511 "config": [ 00:14:16.511 { 00:14:16.511 "params": { 00:14:16.511 "block_size": 512, 00:14:16.511 "num_blocks": 2097152, 00:14:16.511 "name": "malloc0" 00:14:16.511 }, 00:14:16.511 "method": "bdev_malloc_create" 00:14:16.511 }, 00:14:16.511 { 00:14:16.511 "params": { 00:14:16.511 "io_mechanism": "io_uring", 00:14:16.511 "filename": "/dev/nullb0", 00:14:16.511 "name": "null0" 00:14:16.511 }, 00:14:16.511 "method": "bdev_xnvme_create" 00:14:16.511 }, 00:14:16.511 { 00:14:16.511 "method": "bdev_wait_for_examine" 00:14:16.512 } 00:14:16.512 ] 00:14:16.512 } 00:14:16.512 ] 00:14:16.512 } 00:14:16.512 [2024-11-15 11:25:59.301672] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:14:16.512 [2024-11-15 11:25:59.302192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70451 ] 00:14:16.771 [2024-11-15 11:25:59.473014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.771 [2024-11-15 11:25:59.602742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.304  [2024-11-15T11:26:03.187Z] Copying: 211/1024 [MB] (211 MBps) [2024-11-15T11:26:04.123Z] Copying: 425/1024 [MB] (214 MBps) [2024-11-15T11:26:05.059Z] Copying: 634/1024 [MB] (208 MBps) [2024-11-15T11:26:05.994Z] Copying: 841/1024 [MB] (206 MBps) [2024-11-15T11:26:09.279Z] Copying: 1024/1024 [MB] (average 209 MBps) 00:14:26.330 00:14:26.588 11:26:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:26.588 11:26:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:26.588 00:14:26.588 real 0m40.950s 00:14:26.588 user 0m35.066s 00:14:26.588 sys 0m5.262s 00:14:26.588 11:26:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:26.588 11:26:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 ************************************ 00:14:26.588 END TEST xnvme_to_malloc_dd_copy 00:14:26.588 ************************************ 00:14:26.588 11:26:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:26.588 11:26:09 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:26.588 11:26:09 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:26.588 11:26:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 ************************************ 00:14:26.588 START TEST xnvme_bdevperf 00:14:26.588 ************************************ 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:26.588 11:26:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 { 00:14:26.588 "subsystems": [ 00:14:26.588 { 00:14:26.588 "subsystem": "bdev", 00:14:26.588 "config": [ 00:14:26.588 { 00:14:26.588 "params": { 00:14:26.588 "io_mechanism": "libaio", 00:14:26.588 "filename": "/dev/nullb0", 00:14:26.588 "name": "null0" 00:14:26.588 }, 00:14:26.588 "method": "bdev_xnvme_create" 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "method": "bdev_wait_for_examine" 00:14:26.588 } 00:14:26.588 ] 00:14:26.588 } 00:14:26.588 ] 00:14:26.588 } 00:14:26.588 [2024-11-15 11:26:09.502461] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:14:26.588 [2024-11-15 11:26:09.503156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70589 ] 00:14:26.847 [2024-11-15 11:26:09.686670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.106 [2024-11-15 11:26:09.816103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.364 Running I/O for 5 seconds... 
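The @74 trace above shows the exact bdevperf invocation and the gen_conf JSON it reads over /dev/fd/62. To reproduce the libaio run by hand, a sketch; the /tmp path is an assumption, while the flags, the JSON, and the modprobe step are copied from the log.

    modprobe null_blk gb=1    # the test's init_null_blk step, traced earlier

    cat > /tmp/xnvme_null.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "filename": "/dev/nullb0",
                "name": "null0"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_null.json -q 64 -w randread -t 5 -T null0 -o 4096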
00:14:29.234 134400.00 IOPS, 525.00 MiB/s [2024-11-15T11:26:13.558Z] 133728.00 IOPS, 522.38 MiB/s [2024-11-15T11:26:14.492Z] 128874.67 IOPS, 503.42 MiB/s [2024-11-15T11:26:15.427Z] 128896.00 IOPS, 503.50 MiB/s 00:14:32.478 Latency(us) 00:14:32.478 [2024-11-15T11:26:15.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.478 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:32.478 null0 : 5.00 128012.77 500.05 0.00 0.00 496.86 123.81 2412.92 00:14:32.478 [2024-11-15T11:26:15.427Z] =================================================================================================================== 00:14:32.478 [2024-11-15T11:26:15.427Z] Total : 128012.77 500.05 0.00 0.00 496.86 123.81 2412.92 00:14:33.414 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:33.414 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:33.414 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:33.414 11:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:33.414 11:26:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.414 11:26:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.414 { 00:14:33.414 "subsystems": [ 00:14:33.414 { 00:14:33.414 "subsystem": "bdev", 00:14:33.414 "config": [ 00:14:33.414 { 00:14:33.414 "params": { 00:14:33.414 "io_mechanism": "io_uring", 00:14:33.414 "filename": "/dev/nullb0", 00:14:33.414 "name": "null0" 00:14:33.414 }, 00:14:33.414 "method": "bdev_xnvme_create" 00:14:33.414 }, 00:14:33.414 { 00:14:33.414 "method": "bdev_wait_for_examine" 00:14:33.414 } 00:14:33.414 ] 00:14:33.414 } 00:14:33.414 ] 00:14:33.414 } 00:14:33.414 [2024-11-15 11:26:16.299146] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:14:33.414 [2024-11-15 11:26:16.299330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70669 ] 00:14:33.673 [2024-11-15 11:26:16.480636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.673 [2024-11-15 11:26:16.600927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.238 Running I/O for 5 seconds... 
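The io_uring pass starting here differs from the libaio pass only in the io_mechanism parameter; in this job the switch moved null0 from roughly 128K to 174K average IOPS at QD64/4 KiB, per the two Latency tables. Reusing the hypothetical /tmp file from the sketch above:

    sed 's/"libaio"/"io_uring"/' /tmp/xnvme_null.json > /tmp/xnvme_null_uring.json
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_null_uring.json -q 64 -w randread -t 5 -T null0 -o 4096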
00:14:36.107 177280.00 IOPS, 692.50 MiB/s [2024-11-15T11:26:19.989Z] 176768.00 IOPS, 690.50 MiB/s [2024-11-15T11:26:21.364Z] 175594.67 IOPS, 685.92 MiB/s [2024-11-15T11:26:22.298Z] 174704.00 IOPS, 682.44 MiB/s 00:14:39.349 Latency(us) 00:14:39.349 [2024-11-15T11:26:22.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.349 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:39.349 null0 : 5.00 173733.04 678.64 0.00 0.00 365.48 192.70 1995.87 00:14:39.349 [2024-11-15T11:26:22.298Z] =================================================================================================================== 00:14:39.349 [2024-11-15T11:26:22.298Z] Total : 173733.04 678.64 0.00 0.00 365.48 192.70 1995.87 00:14:39.916 11:26:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:39.916 11:26:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:40.210 ************************************ 00:14:40.210 END TEST xnvme_bdevperf 00:14:40.210 ************************************ 00:14:40.210 00:14:40.210 real 0m13.546s 00:14:40.210 user 0m10.498s 00:14:40.210 sys 0m2.797s 00:14:40.210 11:26:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.210 11:26:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.210 ************************************ 00:14:40.210 END TEST nvme_xnvme 00:14:40.210 ************************************ 00:14:40.210 00:14:40.210 real 0m54.808s 00:14:40.210 user 0m45.724s 00:14:40.210 sys 0m8.198s 00:14:40.210 11:26:22 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:40.210 11:26:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.210 11:26:23 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:40.210 11:26:23 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:40.210 11:26:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:40.210 11:26:23 -- common/autotest_common.sh@10 -- # set +x 00:14:40.210 ************************************ 00:14:40.210 START TEST blockdev_xnvme 00:14:40.210 ************************************ 00:14:40.210 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:40.210 * Looking for test storage... 
00:14:40.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:40.210 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:40.210 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:40.210 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:40.499 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:40.499 11:26:23 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.500 11:26:23 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.500 --rc genhtml_branch_coverage=1 00:14:40.500 --rc genhtml_function_coverage=1 00:14:40.500 --rc genhtml_legend=1 00:14:40.500 --rc geninfo_all_blocks=1 00:14:40.500 --rc geninfo_unexecuted_blocks=1 00:14:40.500 00:14:40.500 ' 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.500 --rc genhtml_branch_coverage=1 00:14:40.500 --rc genhtml_function_coverage=1 00:14:40.500 --rc genhtml_legend=1 
00:14:40.500 --rc geninfo_all_blocks=1 00:14:40.500 --rc geninfo_unexecuted_blocks=1 00:14:40.500 00:14:40.500 ' 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.500 --rc genhtml_branch_coverage=1 00:14:40.500 --rc genhtml_function_coverage=1 00:14:40.500 --rc genhtml_legend=1 00:14:40.500 --rc geninfo_all_blocks=1 00:14:40.500 --rc geninfo_unexecuted_blocks=1 00:14:40.500 00:14:40.500 ' 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:40.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.500 --rc genhtml_branch_coverage=1 00:14:40.500 --rc genhtml_function_coverage=1 00:14:40.500 --rc genhtml_legend=1 00:14:40.500 --rc geninfo_all_blocks=1 00:14:40.500 --rc geninfo_unexecuted_blocks=1 00:14:40.500 00:14:40.500 ' 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70817 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70817 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 70817 ']' 00:14:40.500 11:26:23 blockdev_xnvme -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:40.500 11:26:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.500 [2024-11-15 11:26:23.363380] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:14:40.500 [2024-11-15 11:26:23.363588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70817 ] 00:14:40.757 [2024-11-15 11:26:23.551023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.758 [2024-11-15 11:26:23.671440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.690 11:26:24 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:41.690 11:26:24 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:14:41.690 11:26:24 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:41.690 11:26:24 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:14:41.690 11:26:24 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:41.690 11:26:24 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:41.690 11:26:24 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:41.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:42.220 Waiting for block devices as requested 00:14:42.220 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.220 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.220 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:42.479 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:47.750 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- 
# is_block_zoned nvme1n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.750 11:26:30 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.750 nvme0n1 00:14:47.750 nvme1n1 00:14:47.750 nvme2n1 00:14:47.750 nvme2n2 00:14:47.750 nvme2n3 00:14:47.750 nvme3n1 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:14:47.750 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.750 11:26:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.751 11:26:30 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "198d8ebb-e771-41ba-9c63-90ce5d6f01ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "198d8ebb-e771-41ba-9c63-90ce5d6f01ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8b211189-3b75-4095-ba63-85363c4761f1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8b211189-3b75-4095-ba63-85363c4761f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "58b19059-18da-4638-af85-c5a8cf31f5e9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "58b19059-18da-4638-af85-c5a8cf31f5e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d9c3d04d-e050-4f7b-b8f5-0142807f6817"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9c3d04d-e050-4f7b-b8f5-0142807f6817",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c669c177-993b-4906-9e51-edc1a44a60a9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c669c177-993b-4906-9e51-edc1a44a60a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "25ec8fd8-0723-46f4-8b51-f13383d82054"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "25ec8fd8-0723-46f4-8b51-f13383d82054",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:47.751 11:26:30 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 70817 
00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 70817 ']' 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 70817 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70817 00:14:47.751 killing process with pid 70817 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70817' 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 70817 00:14:47.751 11:26:30 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 70817 00:14:50.290 11:26:32 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:50.290 11:26:32 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:50.290 11:26:32 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:50.290 11:26:32 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.290 11:26:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.290 ************************************ 00:14:50.290 START TEST bdev_hello_world 00:14:50.290 ************************************ 00:14:50.290 11:26:32 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:50.290 [2024-11-15 11:26:32.784994] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:14:50.290 [2024-11-15 11:26:32.785197] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71183 ] 00:14:50.290 [2024-11-15 11:26:32.969682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.290 [2024-11-15 11:26:33.083346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.856 [2024-11-15 11:26:33.512515] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:50.856 [2024-11-15 11:26:33.512577] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:50.856 [2024-11-15 11:26:33.512615] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:50.856 [2024-11-15 11:26:33.515108] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:50.856 [2024-11-15 11:26:33.515566] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:50.856 [2024-11-15 11:26:33.515598] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:50.856 [2024-11-15 11:26:33.515853] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
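hello_bdev is the smallest possible consumer of the bdev layer: it opens the named bdev from the generated JSON config, writes the string "Hello World!" through an io channel, reads it back, and stops the app, which is exactly the NOTICE sequence traced here. Rerunning the step by hand uses the same command the harness just traced, with paths shortened to be relative to the spdk checkout in this workspace:

    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1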
00:14:50.856 00:14:50.856 [2024-11-15 11:26:33.515888] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:51.791 00:14:51.791 real 0m1.845s 00:14:51.791 user 0m1.452s 00:14:51.791 sys 0m0.275s 00:14:51.791 11:26:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:51.791 ************************************ 00:14:51.791 END TEST bdev_hello_world 00:14:51.791 ************************************ 00:14:51.791 11:26:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:51.791 11:26:34 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:51.791 11:26:34 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:51.791 11:26:34 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:51.791 11:26:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.791 ************************************ 00:14:51.791 START TEST bdev_bounds 00:14:51.791 ************************************ 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71225 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:51.791 Process bdevio pid: 71225 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71225' 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71225 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71225 ']' 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:51.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:51.791 11:26:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:51.791 [2024-11-15 11:26:34.688731] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:14:51.791 [2024-11-15 11:26:34.689810] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71225 ] 00:14:52.050 [2024-11-15 11:26:34.873450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.308 [2024-11-15 11:26:35.004646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.308 [2024-11-15 11:26:35.004749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.308 [2024-11-15 11:26:35.004764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.877 11:26:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:52.877 11:26:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:14:52.877 11:26:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:52.877 I/O targets: 00:14:52.877 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:52.877 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:52.877 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:52.877 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:52.877 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:52.877 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:52.877 00:14:52.877 00:14:52.877 CUnit - A unit testing framework for C - Version 2.1-3 00:14:52.877 http://cunit.sourceforge.net/ 00:14:52.877 00:14:52.877 00:14:52.877 Suite: bdevio tests on: nvme3n1 00:14:52.877 Test: blockdev write read block ...passed 00:14:52.877 Test: blockdev write zeroes read block ...passed 00:14:52.877 Test: blockdev write zeroes read no split ...passed 00:14:53.137 Test: blockdev write zeroes read split ...passed 00:14:53.137 Test: blockdev write zeroes read split partial ...passed 00:14:53.137 Test: blockdev reset ...passed 00:14:53.137 Test: blockdev write read 8 blocks ...passed 00:14:53.137 Test: blockdev write read size > 128k ...passed 00:14:53.137 Test: blockdev write read invalid size ...passed 00:14:53.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.137 Test: blockdev write read max offset ...passed 00:14:53.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.137 Test: blockdev writev readv 8 blocks ...passed 00:14:53.137 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.137 Test: blockdev writev readv block ...passed 00:14:53.137 Test: blockdev writev readv size > 128k ...passed 00:14:53.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.137 Test: blockdev comparev and writev ...passed 00:14:53.137 Test: blockdev nvme passthru rw ...passed 00:14:53.137 Test: blockdev nvme passthru vendor specific ...passed 00:14:53.137 Test: blockdev nvme admin passthru ...passed 00:14:53.137 Test: blockdev copy ...passed 00:14:53.137 Suite: bdevio tests on: nvme2n3 00:14:53.137 Test: blockdev write read block ...passed 00:14:53.137 Test: blockdev write zeroes read block ...passed 00:14:53.137 Test: blockdev write zeroes read no split ...passed 00:14:53.137 Test: blockdev write zeroes read split ...passed 00:14:53.137 Test: blockdev write zeroes read split partial ...passed 00:14:53.137 Test: blockdev reset ...passed 
00:14:53.137 Test: blockdev write read 8 blocks ...passed 00:14:53.137 Test: blockdev write read size > 128k ...passed 00:14:53.137 Test: blockdev write read invalid size ...passed 00:14:53.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.137 Test: blockdev write read max offset ...passed 00:14:53.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.137 Test: blockdev writev readv 8 blocks ...passed 00:14:53.137 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.137 Test: blockdev writev readv block ...passed 00:14:53.137 Test: blockdev writev readv size > 128k ...passed 00:14:53.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.137 Test: blockdev comparev and writev ...passed 00:14:53.137 Test: blockdev nvme passthru rw ...passed 00:14:53.137 Test: blockdev nvme passthru vendor specific ...passed 00:14:53.137 Test: blockdev nvme admin passthru ...passed 00:14:53.137 Test: blockdev copy ...passed 00:14:53.137 Suite: bdevio tests on: nvme2n2 00:14:53.137 Test: blockdev write read block ...passed 00:14:53.137 Test: blockdev write zeroes read block ...passed 00:14:53.137 Test: blockdev write zeroes read no split ...passed 00:14:53.137 Test: blockdev write zeroes read split ...passed 00:14:53.137 Test: blockdev write zeroes read split partial ...passed 00:14:53.137 Test: blockdev reset ...passed 00:14:53.137 Test: blockdev write read 8 blocks ...passed 00:14:53.137 Test: blockdev write read size > 128k ...passed 00:14:53.137 Test: blockdev write read invalid size ...passed 00:14:53.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.137 Test: blockdev write read max offset ...passed 00:14:53.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.137 Test: blockdev writev readv 8 blocks ...passed 00:14:53.137 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.137 Test: blockdev writev readv block ...passed 00:14:53.137 Test: blockdev writev readv size > 128k ...passed 00:14:53.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.137 Test: blockdev comparev and writev ...passed 00:14:53.137 Test: blockdev nvme passthru rw ...passed 00:14:53.137 Test: blockdev nvme passthru vendor specific ...passed 00:14:53.137 Test: blockdev nvme admin passthru ...passed 00:14:53.137 Test: blockdev copy ...passed 00:14:53.137 Suite: bdevio tests on: nvme2n1 00:14:53.137 Test: blockdev write read block ...passed 00:14:53.137 Test: blockdev write zeroes read block ...passed 00:14:53.137 Test: blockdev write zeroes read no split ...passed 00:14:53.137 Test: blockdev write zeroes read split ...passed 00:14:53.137 Test: blockdev write zeroes read split partial ...passed 00:14:53.137 Test: blockdev reset ...passed 00:14:53.137 Test: blockdev write read 8 blocks ...passed 00:14:53.137 Test: blockdev write read size > 128k ...passed 00:14:53.137 Test: blockdev write read invalid size ...passed 00:14:53.137 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.137 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.137 Test: blockdev write read max offset ...passed 00:14:53.137 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.137 Test: blockdev writev readv 8 blocks 
...passed 00:14:53.137 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.137 Test: blockdev writev readv block ...passed 00:14:53.137 Test: blockdev writev readv size > 128k ...passed 00:14:53.137 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.137 Test: blockdev comparev and writev ...passed 00:14:53.137 Test: blockdev nvme passthru rw ...passed 00:14:53.137 Test: blockdev nvme passthru vendor specific ...passed 00:14:53.137 Test: blockdev nvme admin passthru ...passed 00:14:53.137 Test: blockdev copy ...passed 00:14:53.137 Suite: bdevio tests on: nvme1n1 00:14:53.137 Test: blockdev write read block ...passed 00:14:53.137 Test: blockdev write zeroes read block ...passed 00:14:53.396 Test: blockdev write zeroes read no split ...passed 00:14:53.396 Test: blockdev write zeroes read split ...passed 00:14:53.396 Test: blockdev write zeroes read split partial ...passed 00:14:53.396 Test: blockdev reset ...passed 00:14:53.396 Test: blockdev write read 8 blocks ...passed 00:14:53.396 Test: blockdev write read size > 128k ...passed 00:14:53.396 Test: blockdev write read invalid size ...passed 00:14:53.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.396 Test: blockdev write read max offset ...passed 00:14:53.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.396 Test: blockdev writev readv 8 blocks ...passed 00:14:53.396 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.396 Test: blockdev writev readv block ...passed 00:14:53.396 Test: blockdev writev readv size > 128k ...passed 00:14:53.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.396 Test: blockdev comparev and writev ...passed 00:14:53.396 Test: blockdev nvme passthru rw ...passed 00:14:53.396 Test: blockdev nvme passthru vendor specific ...passed 00:14:53.396 Test: blockdev nvme admin passthru ...passed 00:14:53.396 Test: blockdev copy ...passed 00:14:53.396 Suite: bdevio tests on: nvme0n1 00:14:53.396 Test: blockdev write read block ...passed 00:14:53.396 Test: blockdev write zeroes read block ...passed 00:14:53.396 Test: blockdev write zeroes read no split ...passed 00:14:53.396 Test: blockdev write zeroes read split ...passed 00:14:53.396 Test: blockdev write zeroes read split partial ...passed 00:14:53.396 Test: blockdev reset ...passed 00:14:53.396 Test: blockdev write read 8 blocks ...passed 00:14:53.396 Test: blockdev write read size > 128k ...passed 00:14:53.396 Test: blockdev write read invalid size ...passed 00:14:53.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.396 Test: blockdev write read max offset ...passed 00:14:53.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.396 Test: blockdev writev readv 8 blocks ...passed 00:14:53.396 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.396 Test: blockdev writev readv block ...passed 00:14:53.396 Test: blockdev writev readv size > 128k ...passed 00:14:53.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.396 Test: blockdev comparev and writev ...passed 00:14:53.396 Test: blockdev nvme passthru rw ...passed 00:14:53.396 Test: blockdev nvme passthru vendor specific ...passed 00:14:53.396 Test: blockdev nvme admin passthru ...passed 00:14:53.396 Test: blockdev copy ...passed 
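Each bdevio suite above runs the same 23 tests against one bdev; across six bdevs that accounts for the 138 tests and 780 asserts totalled in the CUnit summary that follows. The harness drives this in two steps, visible in the trace at blockdev.sh@288 and @293: bdevio is started in wait mode against the shared config, then tests.py fires the perform_tests RPC. A sketch of that pairing (the backgrounding with & is illustrative; the harness manages the process lifetime itself):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    test/bdev/bdevio/tests.py perform_tests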
00:14:53.396 00:14:53.396 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.396 suites 6 6 n/a 0 0 00:14:53.396 tests 138 138 138 0 0 00:14:53.396 asserts 780 780 780 0 n/a 00:14:53.396 00:14:53.396 Elapsed time = 1.098 seconds 00:14:53.396 0 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71225 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71225 ']' 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71225 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71225 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71225' 00:14:53.396 killing process with pid 71225 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71225 00:14:53.396 11:26:36 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71225 00:14:54.771 11:26:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:54.771 00:14:54.771 real 0m2.751s 00:14:54.771 user 0m6.905s 00:14:54.771 sys 0m0.441s 00:14:54.771 11:26:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.771 11:26:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:54.771 ************************************ 00:14:54.771 END TEST bdev_bounds 00:14:54.771 ************************************ 00:14:54.771 11:26:37 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:54.771 11:26:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:54.771 11:26:37 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.771 11:26:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.771 ************************************ 00:14:54.771 START TEST bdev_nbd 00:14:54.771 ************************************ 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
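nbd_function_test, which starts here, checks each bdev end to end through the kernel NBD driver: bdev_svc comes up on its own RPC socket, every bdev is attached to a /dev/nbdX node, a single 4 KiB direct-I/O read (the dd ... iflag=direct lines below) proves the data path, and the node is detached again. A condensed sketch using the same RPCs traced below, with the socket and config from this run; the modprobe is an assumed precondition, since the harness only tests for /sys/module/nbd:

    sudo modprobe nbd                                  # assumption: module not yet loaded
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks   # prints [] once every export is gone

The same attach/verify/detach round trip repeats for nvme1n1 through nvme3n1 in the trace that follows.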
00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71291 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71291 /var/tmp/spdk-nbd.sock 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 71291 ']' 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:54.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:54.771 11:26:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.771 [2024-11-15 11:26:37.503774] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:14:54.771 [2024-11-15 11:26:37.503956] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.771 [2024-11-15 11:26:37.695764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.030 [2024-11-15 11:26:37.821739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.596 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.596 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:14:55.596 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:55.596 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.596 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:55.597 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.163 
1+0 records in 00:14:56.163 1+0 records out 00:14:56.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531801 s, 7.7 MB/s 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:56.163 11:26:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:56.421 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:56.421 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:56.421 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:56.421 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:56.421 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.422 1+0 records in 00:14:56.422 1+0 records out 00:14:56.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797102 s, 5.1 MB/s 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:56.422 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:56.680 11:26:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.680 1+0 records in 00:14:56.680 1+0 records out 00:14:56.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588768 s, 7.0 MB/s 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:56.680 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.938 1+0 records in 00:14:56.938 1+0 records out 00:14:56.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511216 s, 8.0 MB/s 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:56.938 11:26:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.504 1+0 records in 00:14:57.504 1+0 records out 00:14:57.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000883169 s, 4.6 MB/s 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:57.504 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:14:57.763 11:26:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.763 1+0 records in 00:14:57.763 1+0 records out 00:14:57.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590665 s, 6.9 MB/s 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:57.763 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd0", 00:14:58.022 "bdev_name": "nvme0n1" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd1", 00:14:58.022 "bdev_name": "nvme1n1" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd2", 00:14:58.022 "bdev_name": "nvme2n1" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd3", 00:14:58.022 "bdev_name": "nvme2n2" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd4", 00:14:58.022 "bdev_name": "nvme2n3" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd5", 00:14:58.022 "bdev_name": "nvme3n1" 00:14:58.022 } 00:14:58.022 ]' 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd0", 00:14:58.022 "bdev_name": "nvme0n1" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd1", 00:14:58.022 "bdev_name": "nvme1n1" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd2", 00:14:58.022 "bdev_name": "nvme2n1" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd3", 00:14:58.022 "bdev_name": "nvme2n2" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd4", 00:14:58.022 "bdev_name": "nvme2n3" 00:14:58.022 }, 00:14:58.022 { 00:14:58.022 "nbd_device": "/dev/nbd5", 00:14:58.022 "bdev_name": "nvme3n1" 00:14:58.022 } 00:14:58.022 ]' 00:14:58.022 11:26:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.022 11:26:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.281 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.547 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.821 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:59.078 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:59.078 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:59.078 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:59.078 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.078 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.078 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:59.079 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:59.079 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.079 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.079 11:26:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.337 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:59.595 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.854 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:00.112 11:26:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:00.371 /dev/nbd0 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.371 1+0 records in 00:15:00.371 1+0 records out 00:15:00.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523301 s, 7.8 MB/s 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:00.371 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:00.629 /dev/nbd1 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.629 1+0 records in 00:15:00.629 1+0 records out 00:15:00.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645435 s, 6.3 MB/s 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:00.629 11:26:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:00.629 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:00.887 /dev/nbd10 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.887 1+0 records in 00:15:00.887 1+0 records out 00:15:00.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512107 s, 8.0 MB/s 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:00.887 11:26:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:01.146 /dev/nbd11 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.146 11:26:44 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.146 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.404 1+0 records in 00:15:01.404 1+0 records out 00:15:01.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526448 s, 7.8 MB/s 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.404 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:01.663 /dev/nbd12 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.663 1+0 records in 00:15:01.663 1+0 records out 00:15:01.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064047 s, 6.4 MB/s 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.663 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:01.924 /dev/nbd13 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:01.924 1+0 records in 00:15:01.924 1+0 records out 00:15:01.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681417 s, 6.0 MB/s 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.924 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:01.925 11:26:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:02.183 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:02.183 { 00:15:02.183 "nbd_device": "/dev/nbd0", 00:15:02.183 "bdev_name": "nvme0n1" 00:15:02.183 }, 00:15:02.183 { 00:15:02.183 "nbd_device": "/dev/nbd1", 00:15:02.183 "bdev_name": "nvme1n1" 00:15:02.183 }, 00:15:02.183 { 00:15:02.183 "nbd_device": "/dev/nbd10", 00:15:02.183 "bdev_name": "nvme2n1" 00:15:02.183 }, 00:15:02.183 { 00:15:02.184 "nbd_device": "/dev/nbd11", 00:15:02.184 "bdev_name": "nvme2n2" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd12", 00:15:02.184 "bdev_name": "nvme2n3" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd13", 00:15:02.184 "bdev_name": "nvme3n1" 00:15:02.184 } 00:15:02.184 ]' 00:15:02.184 11:26:45 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd0", 00:15:02.184 "bdev_name": "nvme0n1" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd1", 00:15:02.184 "bdev_name": "nvme1n1" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd10", 00:15:02.184 "bdev_name": "nvme2n1" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd11", 00:15:02.184 "bdev_name": "nvme2n2" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd12", 00:15:02.184 "bdev_name": "nvme2n3" 00:15:02.184 }, 00:15:02.184 { 00:15:02.184 "nbd_device": "/dev/nbd13", 00:15:02.184 "bdev_name": "nvme3n1" 00:15:02.184 } 00:15:02.184 ]' 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:02.184 /dev/nbd1 00:15:02.184 /dev/nbd10 00:15:02.184 /dev/nbd11 00:15:02.184 /dev/nbd12 00:15:02.184 /dev/nbd13' 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:02.184 /dev/nbd1 00:15:02.184 /dev/nbd10 00:15:02.184 /dev/nbd11 00:15:02.184 /dev/nbd12 00:15:02.184 /dev/nbd13' 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:02.184 256+0 records in 00:15:02.184 256+0 records out 00:15:02.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00807807 s, 130 MB/s 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.184 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:02.442 256+0 records in 00:15:02.442 256+0 records out 00:15:02.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129675 s, 8.1 MB/s 00:15:02.442 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.442 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:02.699 256+0 records in 00:15:02.699 256+0 records out 00:15:02.699 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.168246 s, 6.2 MB/s 00:15:02.699 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.699 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:02.699 256+0 records in 00:15:02.699 256+0 records out 00:15:02.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134975 s, 7.8 MB/s 00:15:02.700 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.700 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:02.957 256+0 records in 00:15:02.957 256+0 records out 00:15:02.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152555 s, 6.9 MB/s 00:15:02.957 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.957 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:02.957 256+0 records in 00:15:02.958 256+0 records out 00:15:02.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157764 s, 6.6 MB/s 00:15:02.958 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:02.958 11:26:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:03.216 256+0 records in 00:15:03.216 256+0 records out 00:15:03.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155227 s, 6.8 MB/s 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.216 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.781 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.782 11:26:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.348 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.606 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:04.864 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.122 11:26:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.122 11:26:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:05.381 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:05.639 malloc_lvol_verify 00:15:05.896 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:06.154 5a7d0c5c-4eb7-4789-9951-c1e9b8f94995 00:15:06.154 11:26:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:06.412 22a03611-09fd-4194-ae26-c5464629f0fc 00:15:06.412 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:06.670 /dev/nbd0 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
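The records above trace nbd_with_lvol_verify (bdev/blockdev.sh@323): a 16 MiB malloc bdev is created over the RPC socket, an lvstore "lvs" and a 4 MiB volume "lvol" are carved out of it, the volume is exported as /dev/nbd0, and once wait_for_nbd_set_capacity sees a non-zero /sys/block/nbd0/size the helper formats the device; the mkfs.ext4 output follows below. A minimal bash sketch of that flow — the rpc.py calls and the sysfs capacity check are taken from the trace itself, while the function body, polling loop, and variable names here are paraphrases, not the exact nbd_common.sh source:

    # Sketch only: condensed from the nbd_common.sh@131-141 records above.
    nbd_with_lvol_verify() {
        local rpc_server=$1 nbd=$2
        local rpc="scripts/rpc.py -s $rpc_server"
        $rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB, 512 B blocks
        $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
        $rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MiB logical volume
        $rpc nbd_start_disk lvs/lvol "$nbd"
        # the kernel sets the NBD capacity asynchronously; wait until it is
        # non-zero (the trace shows 8192 512 B sectors, i.e. the 4 MiB lvol)
        while (( $(cat "/sys/block/${nbd#/dev/}/size") == 0 )); do sleep 0.1; done
        mkfs.ext4 "$nbd"
    }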
00:15:06.670 mke2fs 1.47.0 (5-Feb-2023) 00:15:06.670 Discarding device blocks: 0/4096 done 00:15:06.670 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:06.670 00:15:06.670 Allocating group tables: 0/1 done 00:15:06.670 Writing inode tables: 0/1 done 00:15:06.670 Creating journal (1024 blocks): done 00:15:06.670 Writing superblocks and filesystem accounting information: 0/1 done 00:15:06.670 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:06.670 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71291 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 71291 ']' 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 71291 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71291 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:07.250 killing process with pid 71291 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71291' 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 71291 00:15:07.250 11:26:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 71291 00:15:08.184 11:26:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:08.184 00:15:08.184 real 0m13.663s 00:15:08.184 user 0m19.565s 00:15:08.184 sys 0m4.370s 00:15:08.184 11:26:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:08.184 11:26:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:08.184 ************************************ 
00:15:08.184 END TEST bdev_nbd 00:15:08.184 ************************************ 00:15:08.184 11:26:51 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:08.184 11:26:51 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:08.184 11:26:51 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:08.184 11:26:51 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:08.184 11:26:51 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:08.184 11:26:51 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:08.184 11:26:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.184 ************************************ 00:15:08.184 START TEST bdev_fio 00:15:08.184 ************************************ 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:08.184 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:15:08.184 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # 
echo serialize_overlap=1 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:08.443 ************************************ 00:15:08.443 START TEST bdev_fio_rw_verify 00:15:08.443 ************************************ 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:08.443 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:08.444 11:26:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:08.702 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:08.702 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:08.702 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:08.702 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:08.702 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:08.702 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:08.702 fio-3.35 00:15:08.702 Starting 6 threads 00:15:20.909 00:15:20.909 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71721: Fri Nov 15 11:27:02 2024 00:15:20.909 read: IOPS=29.1k, BW=114MiB/s (119MB/s)(1136MiB/10001msec) 00:15:20.909 slat (usec): min=3, max=1017, avg= 6.94, stdev= 4.40 00:15:20.909 clat (usec): min=99, max=4802, avg=657.09, 
stdev=216.97 00:15:20.909 lat (usec): min=106, max=4808, avg=664.04, stdev=217.57 00:15:20.909 clat percentiles (usec): 00:15:20.909 | 50.000th=[ 693], 99.000th=[ 1156], 99.900th=[ 1778], 99.990th=[ 3949], 00:15:20.909 | 99.999th=[ 4817] 00:15:20.909 write: IOPS=29.4k, BW=115MiB/s (121MB/s)(1150MiB/10001msec); 0 zone resets 00:15:20.909 slat (usec): min=9, max=1564, avg=23.94, stdev=23.95 00:15:20.909 clat (usec): min=85, max=3832, avg=733.80, stdev=211.13 00:15:20.910 lat (usec): min=102, max=3859, avg=757.74, stdev=212.49 00:15:20.910 clat percentiles (usec): 00:15:20.910 | 50.000th=[ 750], 99.000th=[ 1287], 99.900th=[ 1729], 99.990th=[ 2343], 00:15:20.910 | 99.999th=[ 3785] 00:15:20.910 bw ( KiB/s): min=98319, max=145742, per=100.00%, avg=118282.84, stdev=2208.36, samples=114 00:15:20.910 iops : min=24579, max=36435, avg=29570.42, stdev=552.09, samples=114 00:15:20.910 lat (usec) : 100=0.01%, 250=2.60%, 500=15.53%, 750=39.66%, 1000=37.10% 00:15:20.910 lat (msec) : 2=5.07%, 4=0.05%, 10=0.01% 00:15:20.910 cpu : usr=62.33%, sys=25.45%, ctx=7155, majf=0, minf=24765 00:15:20.910 IO depths : 1=12.0%, 2=24.4%, 4=50.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.910 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.910 issued rwts: total=290936,294275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:20.910 00:15:20.910 Run status group 0 (all jobs): 00:15:20.910 READ: bw=114MiB/s (119MB/s), 114MiB/s-114MiB/s (119MB/s-119MB/s), io=1136MiB (1192MB), run=10001-10001msec 00:15:20.910 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1150MiB (1205MB), run=10001-10001msec 00:15:20.910 ----------------------------------------------------- 00:15:20.910 Suppressions used: 00:15:20.910 count bytes template 00:15:20.910 6 48 /usr/src/fio/parse.c 00:15:20.910 3134 300864 /usr/src/fio/iolog.c 00:15:20.910 1 8 libtcmalloc_minimal.so 00:15:20.910 1 904 libcrypto.so 00:15:20.910 ----------------------------------------------------- 00:15:20.910 00:15:20.910 00:15:20.910 real 0m12.448s 00:15:20.910 user 0m39.281s 00:15:20.910 sys 0m15.706s 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:20.910 ************************************ 00:15:20.910 END TEST bdev_fio_rw_verify 00:15:20.910 ************************************ 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:15:20.910 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "198d8ebb-e771-41ba-9c63-90ce5d6f01ed"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "198d8ebb-e771-41ba-9c63-90ce5d6f01ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "8b211189-3b75-4095-ba63-85363c4761f1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "8b211189-3b75-4095-ba63-85363c4761f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "58b19059-18da-4638-af85-c5a8cf31f5e9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "58b19059-18da-4638-af85-c5a8cf31f5e9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "d9c3d04d-e050-4f7b-b8f5-0142807f6817"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d9c3d04d-e050-4f7b-b8f5-0142807f6817",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "c669c177-993b-4906-9e51-edc1a44a60a9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c669c177-993b-4906-9e51-edc1a44a60a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "25ec8fd8-0723-46f4-8b51-f13383d82054"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "25ec8fd8-0723-46f4-8b51-f13383d82054",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:20.911 /home/vagrant/spdk_repo/spdk 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
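The jq filter above is how the trim pass picks its targets: only bdevs whose supported_io_types report unmap=true get a fio job section. A standalone sketch of the same selection against a live target (assumes rpc.py from the SPDK tree and the default RPC socket; the RPC returns an array, hence the leading .[]):

  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'

Every xNVMe bdev in the dump above reports "unmap": false, so the filter matches nothing and the generated trim config carries no job sections.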
00:15:20.911 00:15:20.911 real 0m12.625s 00:15:20.911 user 0m39.373s 00:15:20.911 sys 0m15.789s 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:20.911 11:27:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:20.911 ************************************ 00:15:20.911 END TEST bdev_fio 00:15:20.911 ************************************ 00:15:20.911 11:27:03 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:20.911 11:27:03 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:20.911 11:27:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:20.911 11:27:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:20.911 11:27:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.911 ************************************ 00:15:20.911 START TEST bdev_verify 00:15:20.911 ************************************ 00:15:20.911 11:27:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:21.169 [2024-11-15 11:27:03.866821] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:21.169 [2024-11-15 11:27:03.866971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71890 ] 00:15:21.169 [2024-11-15 11:27:04.046410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:21.428 [2024-11-15 11:27:04.227510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.428 [2024-11-15 11:27:04.227524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.995 Running I/O for 5 seconds... 
00:15:24.306 22208.00 IOPS, 86.75 MiB/s [2024-11-15T11:27:08.189Z] 21712.00 IOPS, 84.81 MiB/s [2024-11-15T11:27:09.125Z] 22208.00 IOPS, 86.75 MiB/s [2024-11-15T11:27:10.062Z] 22200.00 IOPS, 86.72 MiB/s [2024-11-15T11:27:10.062Z] 22144.00 IOPS, 86.50 MiB/s 00:15:27.113 Latency(us) 00:15:27.113 [2024-11-15T11:27:10.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.113 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x0 length 0xa0000 00:15:27.113 nvme0n1 : 5.05 1571.68 6.14 0.00 0.00 81287.69 16920.20 70063.94 00:15:27.113 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0xa0000 length 0xa0000 00:15:27.113 nvme0n1 : 5.06 1618.94 6.32 0.00 0.00 78918.57 16205.27 72923.69 00:15:27.113 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x0 length 0xbd0bd 00:15:27.113 nvme1n1 : 5.05 2895.48 11.31 0.00 0.00 43860.27 5451.40 64821.06 00:15:27.113 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:27.113 nvme1n1 : 5.04 2962.85 11.57 0.00 0.00 42977.28 5540.77 58863.24 00:15:27.113 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x0 length 0x80000 00:15:27.113 nvme2n1 : 5.05 1570.63 6.14 0.00 0.00 80918.05 19184.17 63391.19 00:15:27.113 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x80000 length 0x80000 00:15:27.113 nvme2n1 : 5.05 1623.66 6.34 0.00 0.00 78253.07 16324.42 71970.44 00:15:27.113 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x0 length 0x80000 00:15:27.113 nvme2n2 : 5.08 1588.96 6.21 0.00 0.00 79825.92 8996.31 70540.57 00:15:27.113 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x80000 length 0x80000 00:15:27.113 nvme2n2 : 5.06 1642.92 6.42 0.00 0.00 77182.09 8936.73 67204.19 00:15:27.113 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x0 length 0x80000 00:15:27.113 nvme2n3 : 5.08 1586.94 6.20 0.00 0.00 79760.72 7983.48 68157.44 00:15:27.113 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x80000 length 0x80000 00:15:27.113 nvme2n3 : 5.07 1641.18 6.41 0.00 0.00 77107.06 6345.08 69110.69 00:15:27.113 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x0 length 0x20000 00:15:27.113 nvme3n1 : 5.08 1588.26 6.20 0.00 0.00 79532.33 9651.67 68157.44 00:15:27.113 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.113 Verification LBA range: start 0x20000 length 0x20000 00:15:27.113 nvme3n1 : 5.07 1640.46 6.41 0.00 0.00 76987.87 5183.30 75306.82 00:15:27.113 [2024-11-15T11:27:10.062Z] =================================================================================================================== 00:15:27.113 [2024-11-15T11:27:10.062Z] Total : 21931.95 85.67 0.00 0.00 69483.13 5183.30 75306.82 00:15:28.049 00:15:28.049 real 0m7.097s 00:15:28.049 user 0m11.099s 00:15:28.049 sys 0m1.815s 00:15:28.049 11:27:10 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.049 11:27:10 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 ************************************ 00:15:28.049 END TEST bdev_verify 00:15:28.049 ************************************ 00:15:28.049 11:27:10 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:28.049 11:27:10 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:28.049 11:27:10 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.049 11:27:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 ************************************ 00:15:28.049 START TEST bdev_verify_big_io 00:15:28.049 ************************************ 00:15:28.049 11:27:10 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:28.365 [2024-11-15 11:27:11.040120] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:28.365 [2024-11-15 11:27:11.040298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71990 ] 00:15:28.365 [2024-11-15 11:27:11.226940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.622 [2024-11-15 11:27:11.351275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.622 [2024-11-15 11:27:11.351277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.188 Running I/O for 5 seconds... 
00:15:35.003 1440.00 IOPS, 90.00 MiB/s [2024-11-15T11:27:18.211Z] 2928.00 IOPS, 183.00 MiB/s [2024-11-15T11:27:18.211Z] 3136.00 IOPS, 196.00 MiB/s 00:15:35.262 Latency(us) 00:15:35.262 [2024-11-15T11:27:18.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.262 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x0 length 0xa000 00:15:35.262 nvme0n1 : 6.04 127.10 7.94 0.00 0.00 986229.29 70063.94 1029510.98 00:15:35.262 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0xa000 length 0xa000 00:15:35.262 nvme0n1 : 6.02 100.92 6.31 0.00 0.00 1228542.87 173491.67 1929379.84 00:15:35.262 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x0 length 0xbd0b 00:15:35.262 nvme1n1 : 6.05 140.28 8.77 0.00 0.00 869377.83 29431.62 1037136.99 00:15:35.262 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:35.262 nvme1n1 : 6.02 159.56 9.97 0.00 0.00 753786.14 16681.89 754974.72 00:15:35.262 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x0 length 0x8000 00:15:35.262 nvme2n1 : 6.03 129.97 8.12 0.00 0.00 909346.20 46947.61 1753981.67 00:15:35.262 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x8000 length 0x8000 00:15:35.262 nvme2n1 : 6.02 162.15 10.13 0.00 0.00 718002.37 23354.65 747348.71 00:15:35.262 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x0 length 0x8000 00:15:35.262 nvme2n2 : 6.05 141.51 8.84 0.00 0.00 804753.13 48139.17 857925.82 00:15:35.262 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x8000 length 0x8000 00:15:35.262 nvme2n2 : 6.03 127.41 7.96 0.00 0.00 904356.93 24546.21 1601461.53 00:15:35.262 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x0 length 0x8000 00:15:35.262 nvme2n3 : 6.04 127.25 7.95 0.00 0.00 870189.92 39559.91 1837867.75 00:15:35.262 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x8000 length 0x8000 00:15:35.262 nvme2n3 : 6.02 110.25 6.89 0.00 0.00 1020164.56 13107.20 2623346.50 00:15:35.262 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x0 length 0x2000 00:15:35.262 nvme3n1 : 6.05 137.41 8.59 0.00 0.00 783213.78 13881.72 1548079.48 00:15:35.262 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:35.262 Verification LBA range: start 0x2000 length 0x2000 00:15:35.262 nvme3n1 : 6.03 140.62 8.79 0.00 0.00 774990.67 15609.48 1372681.31 00:15:35.262 [2024-11-15T11:27:18.211Z] =================================================================================================================== 00:15:35.262 [2024-11-15T11:27:18.211Z] Total : 1604.42 100.28 0.00 0.00 869512.28 13107.20 2623346.50 00:15:36.639 00:15:36.639 real 0m8.296s 00:15:36.639 user 0m14.957s 00:15:36.639 sys 0m0.628s 00:15:36.639 11:27:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:36.639 
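Both verify stages drive the same bdevperf example binary; only the I/O size changes (-o 4096 for bdev_verify, -o 65536 for bdev_verify_big_io). A minimal standalone sketch of the big-I/O run, assuming bdev.json describes the same xNVMe bdevs and the SPDK repo root is the working directory:

  # queue depth 128, 64 KiB I/Os, verify workload, 5 s, cores 0-1 (mask 0x3)
  ./build/examples/bdevperf --json ./test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The flags are copied verbatim from the command traced above.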
************************************ 00:15:36.639 END TEST bdev_verify_big_io 00:15:36.639 ************************************ 00:15:36.639 11:27:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.639 11:27:19 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.639 11:27:19 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:15:36.639 11:27:19 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:36.639 11:27:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.639 ************************************ 00:15:36.639 START TEST bdev_write_zeroes 00:15:36.639 ************************************ 00:15:36.639 11:27:19 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.639 [2024-11-15 11:27:19.388865] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:36.639 [2024-11-15 11:27:19.389129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72117 ] 00:15:36.639 [2024-11-15 11:27:19.572981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.897 [2024-11-15 11:27:19.686921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.464 Running I/O for 1 seconds... 
00:15:38.398 79040.00 IOPS, 308.75 MiB/s 00:15:38.398 Latency(us) 00:15:38.398 [2024-11-15T11:27:21.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.398 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.398 nvme0n1 : 1.02 12049.17 47.07 0.00 0.00 10610.99 6494.02 21805.61 00:15:38.398 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.398 nvme1n1 : 1.02 18049.90 70.51 0.00 0.00 7076.39 4110.89 14120.03 00:15:38.398 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.398 nvme2n1 : 1.03 11978.62 46.79 0.00 0.00 10610.59 5630.14 22758.87 00:15:38.398 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.398 nvme2n2 : 1.03 11965.56 46.74 0.00 0.00 10609.88 5362.04 23116.33 00:15:38.398 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.398 nvme2n3 : 1.03 11953.38 46.69 0.00 0.00 10612.70 5451.40 23473.80 00:15:38.398 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:38.398 nvme3n1 : 1.03 11941.95 46.65 0.00 0.00 10612.25 5332.25 23712.12 00:15:38.398 [2024-11-15T11:27:21.347Z] =================================================================================================================== 00:15:38.398 [2024-11-15T11:27:21.347Z] Total : 77938.57 304.45 0.00 0.00 9793.36 4110.89 23712.12 00:15:39.334 00:15:39.334 real 0m2.844s 00:15:39.334 user 0m2.039s 00:15:39.334 sys 0m0.631s 00:15:39.334 11:27:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.334 ************************************ 00:15:39.334 END TEST bdev_write_zeroes 00:15:39.334 ************************************ 00:15:39.334 11:27:22 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:39.334 11:27:22 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:39.335 11:27:22 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:15:39.335 11:27:22 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:39.335 11:27:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.335 ************************************ 00:15:39.335 START TEST bdev_json_nonenclosed 00:15:39.335 ************************************ 00:15:39.335 11:27:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:39.600 [2024-11-15 11:27:22.290103] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:15:39.600 [2024-11-15 11:27:22.290280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72160 ] 00:15:39.600 [2024-11-15 11:27:22.473786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.865 [2024-11-15 11:27:22.582931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.865 [2024-11-15 11:27:22.583118] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:39.865 [2024-11-15 11:27:22.583148] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:39.865 [2024-11-15 11:27:22.583161] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:40.123 00:15:40.123 real 0m0.651s 00:15:40.123 user 0m0.396s 00:15:40.123 sys 0m0.149s 00:15:40.123 11:27:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.123 11:27:22 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:40.123 ************************************ 00:15:40.123 END TEST bdev_json_nonenclosed 00:15:40.123 ************************************ 00:15:40.123 11:27:22 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:40.123 11:27:22 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:15:40.123 11:27:22 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.123 11:27:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:40.123 ************************************ 00:15:40.123 START TEST bdev_json_nonarray 00:15:40.123 ************************************ 00:15:40.123 11:27:22 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:40.123 [2024-11-15 11:27:22.994008] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:40.123 [2024-11-15 11:27:22.994282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72191 ] 00:15:40.382 [2024-11-15 11:27:23.180574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.382 [2024-11-15 11:27:23.312324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.382 [2024-11-15 11:27:23.312479] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
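The two JSON negative tests assert the shape of the config file: a single top-level object whose "subsystems" key is an array (nonenclosed.json drops the enclosing braces, nonarray.json makes "subsystems" a non-array, producing the two errors seen here). A sketch of the smallest shape the loader accepts, with a hypothetical file name; an empty subsystems array parses, though the run will then find no bdevs:

  echo '{ "subsystems": [] }' > /tmp/minimal.json
  ./build/examples/bdevperf --json /tmp/minimal.json -q 128 -o 4096 -w write_zeroes -t 1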
00:15:40.382 [2024-11-15 11:27:23.312508] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:40.382 [2024-11-15 11:27:23.312521] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:40.640 00:15:40.640 real 0m0.680s 00:15:40.640 user 0m0.421s 00:15:40.640 sys 0m0.153s 00:15:40.640 11:27:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.640 ************************************ 00:15:40.640 END TEST bdev_json_nonarray 00:15:40.640 ************************************ 00:15:40.640 11:27:23 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:40.898 11:27:23 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:41.465 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:42.031 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.031 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.031 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.289 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:42.289 00:15:42.289 real 1m2.043s 00:15:42.289 user 1m46.792s 00:15:42.289 sys 0m27.246s 00:15:42.289 11:27:25 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:42.289 ************************************ 00:15:42.289 END TEST blockdev_xnvme 00:15:42.289 ************************************ 00:15:42.289 11:27:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.289 11:27:25 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:42.289 11:27:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:42.289 11:27:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.289 11:27:25 -- common/autotest_common.sh@10 -- # set +x 00:15:42.289 ************************************ 00:15:42.289 START TEST ublk 00:15:42.289 ************************************ 00:15:42.289 11:27:25 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:42.289 * Looking for test storage... 
00:15:42.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:42.289 11:27:25 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:42.289 11:27:25 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:42.289 11:27:25 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:15:42.548 11:27:25 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.548 11:27:25 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.548 11:27:25 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.548 11:27:25 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.548 11:27:25 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.548 11:27:25 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.548 11:27:25 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:42.548 11:27:25 ublk -- scripts/common.sh@345 -- # : 1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.548 11:27:25 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:42.548 11:27:25 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@353 -- # local d=1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.548 11:27:25 ublk -- scripts/common.sh@355 -- # echo 1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.548 11:27:25 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@353 -- # local d=2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.548 11:27:25 ublk -- scripts/common.sh@355 -- # echo 2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.548 11:27:25 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.548 11:27:25 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.548 11:27:25 ublk -- scripts/common.sh@368 -- # return 0 00:15:42.548 11:27:25 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.548 11:27:25 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:42.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.548 --rc genhtml_branch_coverage=1 00:15:42.548 --rc genhtml_function_coverage=1 00:15:42.548 --rc genhtml_legend=1 00:15:42.548 --rc geninfo_all_blocks=1 00:15:42.548 --rc geninfo_unexecuted_blocks=1 00:15:42.548 00:15:42.548 ' 00:15:42.548 11:27:25 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:42.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.548 --rc genhtml_branch_coverage=1 00:15:42.548 --rc genhtml_function_coverage=1 00:15:42.548 --rc genhtml_legend=1 00:15:42.548 --rc geninfo_all_blocks=1 00:15:42.548 --rc geninfo_unexecuted_blocks=1 00:15:42.548 00:15:42.548 ' 00:15:42.548 11:27:25 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:42.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.548 --rc genhtml_branch_coverage=1 00:15:42.548 --rc 
genhtml_function_coverage=1 00:15:42.548 --rc genhtml_legend=1 00:15:42.548 --rc geninfo_all_blocks=1 00:15:42.548 --rc geninfo_unexecuted_blocks=1 00:15:42.548 00:15:42.548 ' 00:15:42.548 11:27:25 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:42.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.549 --rc genhtml_branch_coverage=1 00:15:42.549 --rc genhtml_function_coverage=1 00:15:42.549 --rc genhtml_legend=1 00:15:42.549 --rc geninfo_all_blocks=1 00:15:42.549 --rc geninfo_unexecuted_blocks=1 00:15:42.549 00:15:42.549 ' 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:42.549 11:27:25 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:42.549 11:27:25 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:42.549 11:27:25 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:42.549 11:27:25 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:42.549 11:27:25 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:42.549 11:27:25 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:42.549 11:27:25 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:42.549 11:27:25 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:42.549 11:27:25 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:42.549 11:27:25 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:42.549 11:27:25 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.549 11:27:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:42.549 ************************************ 00:15:42.549 START TEST test_save_ublk_config 00:15:42.549 ************************************ 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72482 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72482 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72482 ']' 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
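test_save_ublk_config needs the ublk kernel driver loaded before the target can expose /dev/ublkb0. The setup traced above reduces to this sketch (root required for modprobe; spdk_tgt path as used in this run):

  sudo modprobe ublk_drv    # kernel side: provides the ublk control device and the ublkb* block devices
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk    # user side, with ublk debug logging enabled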
00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.549 11:27:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:42.549 [2024-11-15 11:27:25.443968] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:15:42.549 [2024-11-15 11:27:25.444189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72482 ] 00:15:42.807 [2024-11-15 11:27:25.636496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.067 [2024-11-15 11:27:25.787145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:44.003 [2024-11-15 11:27:26.677111] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:44.003 [2024-11-15 11:27:26.678430] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:44.003 malloc0 00:15:44.003 [2024-11-15 11:27:26.755236] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:44.003 [2024-11-15 11:27:26.755362] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:44.003 [2024-11-15 11:27:26.755382] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:44.003 [2024-11-15 11:27:26.755393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:44.003 [2024-11-15 11:27:26.762117] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:44.003 [2024-11-15 11:27:26.762140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:44.003 [2024-11-15 11:27:26.770150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:44.003 [2024-11-15 11:27:26.770302] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:44.003 [2024-11-15 11:27:26.794074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:44.003 0 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.003 11:27:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:44.263 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.263 11:27:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:44.263 
"subsystems": [ 00:15:44.263 { 00:15:44.263 "subsystem": "fsdev", 00:15:44.263 "config": [ 00:15:44.263 { 00:15:44.263 "method": "fsdev_set_opts", 00:15:44.263 "params": { 00:15:44.263 "fsdev_io_pool_size": 65535, 00:15:44.263 "fsdev_io_cache_size": 256 00:15:44.263 } 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "keyring", 00:15:44.263 "config": [] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "iobuf", 00:15:44.263 "config": [ 00:15:44.263 { 00:15:44.263 "method": "iobuf_set_options", 00:15:44.263 "params": { 00:15:44.263 "small_pool_count": 8192, 00:15:44.263 "large_pool_count": 1024, 00:15:44.263 "small_bufsize": 8192, 00:15:44.263 "large_bufsize": 135168, 00:15:44.263 "enable_numa": false 00:15:44.263 } 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "sock", 00:15:44.263 "config": [ 00:15:44.263 { 00:15:44.263 "method": "sock_set_default_impl", 00:15:44.263 "params": { 00:15:44.263 "impl_name": "posix" 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "sock_impl_set_options", 00:15:44.263 "params": { 00:15:44.263 "impl_name": "ssl", 00:15:44.263 "recv_buf_size": 4096, 00:15:44.263 "send_buf_size": 4096, 00:15:44.263 "enable_recv_pipe": true, 00:15:44.263 "enable_quickack": false, 00:15:44.263 "enable_placement_id": 0, 00:15:44.263 "enable_zerocopy_send_server": true, 00:15:44.263 "enable_zerocopy_send_client": false, 00:15:44.263 "zerocopy_threshold": 0, 00:15:44.263 "tls_version": 0, 00:15:44.263 "enable_ktls": false 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "sock_impl_set_options", 00:15:44.263 "params": { 00:15:44.263 "impl_name": "posix", 00:15:44.263 "recv_buf_size": 2097152, 00:15:44.263 "send_buf_size": 2097152, 00:15:44.263 "enable_recv_pipe": true, 00:15:44.263 "enable_quickack": false, 00:15:44.263 "enable_placement_id": 0, 00:15:44.263 "enable_zerocopy_send_server": true, 00:15:44.263 "enable_zerocopy_send_client": false, 00:15:44.263 "zerocopy_threshold": 0, 00:15:44.263 "tls_version": 0, 00:15:44.263 "enable_ktls": false 00:15:44.263 } 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "vmd", 00:15:44.263 "config": [] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "accel", 00:15:44.263 "config": [ 00:15:44.263 { 00:15:44.263 "method": "accel_set_options", 00:15:44.263 "params": { 00:15:44.263 "small_cache_size": 128, 00:15:44.263 "large_cache_size": 16, 00:15:44.263 "task_count": 2048, 00:15:44.263 "sequence_count": 2048, 00:15:44.263 "buf_count": 2048 00:15:44.263 } 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "bdev", 00:15:44.263 "config": [ 00:15:44.263 { 00:15:44.263 "method": "bdev_set_options", 00:15:44.263 "params": { 00:15:44.263 "bdev_io_pool_size": 65535, 00:15:44.263 "bdev_io_cache_size": 256, 00:15:44.263 "bdev_auto_examine": true, 00:15:44.263 "iobuf_small_cache_size": 128, 00:15:44.263 "iobuf_large_cache_size": 16 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "bdev_raid_set_options", 00:15:44.263 "params": { 00:15:44.263 "process_window_size_kb": 1024, 00:15:44.263 "process_max_bandwidth_mb_sec": 0 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "bdev_iscsi_set_options", 00:15:44.263 "params": { 00:15:44.263 "timeout_sec": 30 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "bdev_nvme_set_options", 00:15:44.263 "params": { 00:15:44.263 "action_on_timeout": "none", 
00:15:44.263 "timeout_us": 0, 00:15:44.263 "timeout_admin_us": 0, 00:15:44.263 "keep_alive_timeout_ms": 10000, 00:15:44.263 "arbitration_burst": 0, 00:15:44.263 "low_priority_weight": 0, 00:15:44.263 "medium_priority_weight": 0, 00:15:44.263 "high_priority_weight": 0, 00:15:44.263 "nvme_adminq_poll_period_us": 10000, 00:15:44.263 "nvme_ioq_poll_period_us": 0, 00:15:44.263 "io_queue_requests": 0, 00:15:44.263 "delay_cmd_submit": true, 00:15:44.263 "transport_retry_count": 4, 00:15:44.263 "bdev_retry_count": 3, 00:15:44.263 "transport_ack_timeout": 0, 00:15:44.263 "ctrlr_loss_timeout_sec": 0, 00:15:44.263 "reconnect_delay_sec": 0, 00:15:44.263 "fast_io_fail_timeout_sec": 0, 00:15:44.263 "disable_auto_failback": false, 00:15:44.263 "generate_uuids": false, 00:15:44.263 "transport_tos": 0, 00:15:44.263 "nvme_error_stat": false, 00:15:44.263 "rdma_srq_size": 0, 00:15:44.263 "io_path_stat": false, 00:15:44.263 "allow_accel_sequence": false, 00:15:44.263 "rdma_max_cq_size": 0, 00:15:44.263 "rdma_cm_event_timeout_ms": 0, 00:15:44.263 "dhchap_digests": [ 00:15:44.263 "sha256", 00:15:44.263 "sha384", 00:15:44.263 "sha512" 00:15:44.263 ], 00:15:44.263 "dhchap_dhgroups": [ 00:15:44.263 "null", 00:15:44.263 "ffdhe2048", 00:15:44.263 "ffdhe3072", 00:15:44.263 "ffdhe4096", 00:15:44.263 "ffdhe6144", 00:15:44.263 "ffdhe8192" 00:15:44.263 ] 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "bdev_nvme_set_hotplug", 00:15:44.263 "params": { 00:15:44.263 "period_us": 100000, 00:15:44.263 "enable": false 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "bdev_malloc_create", 00:15:44.263 "params": { 00:15:44.263 "name": "malloc0", 00:15:44.263 "num_blocks": 8192, 00:15:44.263 "block_size": 4096, 00:15:44.263 "physical_block_size": 4096, 00:15:44.263 "uuid": "b8aa54f6-474a-43b1-b9db-d0159eda0a55", 00:15:44.263 "optimal_io_boundary": 0, 00:15:44.263 "md_size": 0, 00:15:44.263 "dif_type": 0, 00:15:44.263 "dif_is_head_of_md": false, 00:15:44.263 "dif_pi_format": 0 00:15:44.263 } 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "method": "bdev_wait_for_examine" 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "scsi", 00:15:44.263 "config": null 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "scheduler", 00:15:44.263 "config": [ 00:15:44.263 { 00:15:44.263 "method": "framework_set_scheduler", 00:15:44.263 "params": { 00:15:44.263 "name": "static" 00:15:44.263 } 00:15:44.263 } 00:15:44.263 ] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "vhost_scsi", 00:15:44.263 "config": [] 00:15:44.263 }, 00:15:44.263 { 00:15:44.263 "subsystem": "vhost_blk", 00:15:44.263 "config": [] 00:15:44.263 }, 00:15:44.263 { 00:15:44.264 "subsystem": "ublk", 00:15:44.264 "config": [ 00:15:44.264 { 00:15:44.264 "method": "ublk_create_target", 00:15:44.264 "params": { 00:15:44.264 "cpumask": "1" 00:15:44.264 } 00:15:44.264 }, 00:15:44.264 { 00:15:44.264 "method": "ublk_start_disk", 00:15:44.264 "params": { 00:15:44.264 "bdev_name": "malloc0", 00:15:44.264 "ublk_id": 0, 00:15:44.264 "num_queues": 1, 00:15:44.264 "queue_depth": 128 00:15:44.264 } 00:15:44.264 } 00:15:44.264 ] 00:15:44.264 }, 00:15:44.264 { 00:15:44.264 "subsystem": "nbd", 00:15:44.264 "config": [] 00:15:44.264 }, 00:15:44.264 { 00:15:44.264 "subsystem": "nvmf", 00:15:44.264 "config": [ 00:15:44.264 { 00:15:44.264 "method": "nvmf_set_config", 00:15:44.264 "params": { 00:15:44.264 "discovery_filter": "match_any", 00:15:44.264 "admin_cmd_passthru": { 00:15:44.264 "identify_ctrlr": false 
00:15:44.264 }, 00:15:44.264 "dhchap_digests": [ 00:15:44.264 "sha256", 00:15:44.264 "sha384", 00:15:44.264 "sha512" 00:15:44.264 ], 00:15:44.264 "dhchap_dhgroups": [ 00:15:44.264 "null", 00:15:44.264 "ffdhe2048", 00:15:44.264 "ffdhe3072", 00:15:44.264 "ffdhe4096", 00:15:44.264 "ffdhe6144", 00:15:44.264 "ffdhe8192" 00:15:44.264 ] 00:15:44.264 } 00:15:44.264 }, 00:15:44.264 { 00:15:44.264 "method": "nvmf_set_max_subsystems", 00:15:44.264 "params": { 00:15:44.264 "max_subsystems": 1024 00:15:44.264 } 00:15:44.264 }, 00:15:44.264 { 00:15:44.264 "method": "nvmf_set_crdt", 00:15:44.264 "params": { 00:15:44.264 "crdt1": 0, 00:15:44.264 "crdt2": 0, 00:15:44.264 "crdt3": 0 00:15:44.264 } 00:15:44.264 } 00:15:44.264 ] 00:15:44.264 }, 00:15:44.264 { 00:15:44.264 "subsystem": "iscsi", 00:15:44.264 "config": [ 00:15:44.264 { 00:15:44.264 "method": "iscsi_set_options", 00:15:44.264 "params": { 00:15:44.264 "node_base": "iqn.2016-06.io.spdk", 00:15:44.264 "max_sessions": 128, 00:15:44.264 "max_connections_per_session": 2, 00:15:44.264 "max_queue_depth": 64, 00:15:44.264 "default_time2wait": 2, 00:15:44.264 "default_time2retain": 20, 00:15:44.264 "first_burst_length": 8192, 00:15:44.264 "immediate_data": true, 00:15:44.264 "allow_duplicated_isid": false, 00:15:44.264 "error_recovery_level": 0, 00:15:44.264 "nop_timeout": 60, 00:15:44.264 "nop_in_interval": 30, 00:15:44.264 "disable_chap": false, 00:15:44.264 "require_chap": false, 00:15:44.264 "mutual_chap": false, 00:15:44.264 "chap_group": 0, 00:15:44.264 "max_large_datain_per_connection": 64, 00:15:44.264 "max_r2t_per_connection": 4, 00:15:44.264 "pdu_pool_size": 36864, 00:15:44.264 "immediate_data_pool_size": 16384, 00:15:44.264 "data_out_pool_size": 2048 00:15:44.264 } 00:15:44.264 } 00:15:44.264 ] 00:15:44.264 } 00:15:44.264 ] 00:15:44.264 }' 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72482 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72482 ']' 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72482 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72482 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:44.264 killing process with pid 72482 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72482' 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72482 00:15:44.264 11:27:27 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72482 00:15:46.165 [2024-11-15 11:27:28.814196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:46.165 [2024-11-15 11:27:28.846200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:46.165 [2024-11-15 11:27:28.846357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:46.165 [2024-11-15 11:27:28.856184] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:46.165 [2024-11-15 
11:27:28.856246] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:46.165 [2024-11-15 11:27:28.856268] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:46.165 [2024-11-15 11:27:28.856302] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:46.165 [2024-11-15 11:27:28.856505] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72547 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72547 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72547 ']' 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:47.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.542 11:27:30 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:47.542 "subsystems": [ 00:15:47.542 { 00:15:47.542 "subsystem": "fsdev", 00:15:47.542 "config": [ 00:15:47.542 { 00:15:47.542 "method": "fsdev_set_opts", 00:15:47.542 "params": { 00:15:47.542 "fsdev_io_pool_size": 65535, 00:15:47.542 "fsdev_io_cache_size": 256 00:15:47.542 } 00:15:47.542 } 00:15:47.542 ] 00:15:47.542 }, 00:15:47.542 { 00:15:47.542 "subsystem": "keyring", 00:15:47.542 "config": [] 00:15:47.542 }, 00:15:47.542 { 00:15:47.542 "subsystem": "iobuf", 00:15:47.542 "config": [ 00:15:47.542 { 00:15:47.542 "method": "iobuf_set_options", 00:15:47.542 "params": { 00:15:47.542 "small_pool_count": 8192, 00:15:47.542 "large_pool_count": 1024, 00:15:47.542 "small_bufsize": 8192, 00:15:47.542 "large_bufsize": 135168, 00:15:47.542 "enable_numa": false 00:15:47.542 } 00:15:47.542 } 00:15:47.542 ] 00:15:47.542 }, 00:15:47.542 { 00:15:47.542 "subsystem": "sock", 00:15:47.542 "config": [ 00:15:47.542 { 00:15:47.542 "method": "sock_set_default_impl", 00:15:47.542 "params": { 00:15:47.542 "impl_name": "posix" 00:15:47.542 } 00:15:47.542 }, 00:15:47.542 { 00:15:47.542 "method": "sock_impl_set_options", 00:15:47.542 "params": { 00:15:47.542 "impl_name": "ssl", 00:15:47.542 "recv_buf_size": 4096, 00:15:47.542 "send_buf_size": 4096, 00:15:47.542 "enable_recv_pipe": true, 00:15:47.542 "enable_quickack": false, 00:15:47.542 "enable_placement_id": 0, 00:15:47.542 "enable_zerocopy_send_server": true, 00:15:47.542 "enable_zerocopy_send_client": false, 00:15:47.542 "zerocopy_threshold": 0, 00:15:47.542 "tls_version": 0, 00:15:47.542 "enable_ktls": false 00:15:47.542 } 00:15:47.542 }, 00:15:47.542 { 00:15:47.542 "method": "sock_impl_set_options", 00:15:47.542 "params": { 00:15:47.542 "impl_name": "posix", 00:15:47.542 "recv_buf_size": 2097152, 00:15:47.542 "send_buf_size": 2097152, 00:15:47.542 "enable_recv_pipe": true, 00:15:47.542 "enable_quickack": false, 00:15:47.542 "enable_placement_id": 0, 00:15:47.542 "enable_zerocopy_send_server": true, 00:15:47.542 "enable_zerocopy_send_client": false, 00:15:47.542 "zerocopy_threshold": 0, 00:15:47.543 "tls_version": 0, 00:15:47.543 "enable_ktls": false 00:15:47.543 } 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "vmd", 00:15:47.543 "config": [] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "accel", 00:15:47.543 "config": [ 
00:15:47.543 { 00:15:47.543 "method": "accel_set_options", 00:15:47.543 "params": { 00:15:47.543 "small_cache_size": 128, 00:15:47.543 "large_cache_size": 16, 00:15:47.543 "task_count": 2048, 00:15:47.543 "sequence_count": 2048, 00:15:47.543 "buf_count": 2048 00:15:47.543 } 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "bdev", 00:15:47.543 "config": [ 00:15:47.543 { 00:15:47.543 "method": "bdev_set_options", 00:15:47.543 "params": { 00:15:47.543 "bdev_io_pool_size": 65535, 00:15:47.543 "bdev_io_cache_size": 256, 00:15:47.543 "bdev_auto_examine": true, 00:15:47.543 "iobuf_small_cache_size": 128, 00:15:47.543 "iobuf_large_cache_size": 16 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "bdev_raid_set_options", 00:15:47.543 "params": { 00:15:47.543 "process_window_size_kb": 1024, 00:15:47.543 "process_max_bandwidth_mb_sec": 0 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "bdev_iscsi_set_options", 00:15:47.543 "params": { 00:15:47.543 "timeout_sec": 30 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "bdev_nvme_set_options", 00:15:47.543 "params": { 00:15:47.543 "action_on_timeout": "none", 00:15:47.543 "timeout_us": 0, 00:15:47.543 "timeout_admin_us": 0, 00:15:47.543 "keep_alive_timeout_ms": 10000, 00:15:47.543 "arbitration_burst": 0, 00:15:47.543 "low_priority_weight": 0, 00:15:47.543 "medium_priority_weight": 0, 00:15:47.543 "high_priority_weight": 0, 00:15:47.543 "nvme_adminq_poll_period_us": 10000, 00:15:47.543 "nvme_ioq_poll_period_us": 0, 00:15:47.543 "io_queue_requests": 0, 00:15:47.543 "delay_cmd_submit": true, 00:15:47.543 "transport_retry_count": 4, 00:15:47.543 "bdev_retry_count": 3, 00:15:47.543 "transport_ack_timeout": 0, 00:15:47.543 "ctrlr_loss_timeout_sec": 0, 00:15:47.543 "reconnect_delay_sec": 0, 00:15:47.543 "fast_io_fail_timeout_sec": 0, 00:15:47.543 "disable_auto_failback": false, 00:15:47.543 "generate_uuids": false, 00:15:47.543 "transport_tos": 0, 00:15:47.543 "nvme_error_stat": false, 00:15:47.543 "rdma_srq_size": 0, 00:15:47.543 "io_path_stat": false, 00:15:47.543 "allow_accel_sequence": false, 00:15:47.543 "rdma_max_cq_size": 0, 00:15:47.543 "rdma_cm_event_timeout_ms": 0, 00:15:47.543 "dhchap_digests": [ 00:15:47.543 "sha256", 00:15:47.543 "sha384", 00:15:47.543 "sha512" 00:15:47.543 ], 00:15:47.543 "dhchap_dhgroups": [ 00:15:47.543 "null", 00:15:47.543 "ffdhe2048", 00:15:47.543 "ffdhe3072", 00:15:47.543 "ffdhe4096", 00:15:47.543 "ffdhe6144", 00:15:47.543 "ffdhe8192" 00:15:47.543 ] 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "bdev_nvme_set_hotplug", 00:15:47.543 "params": { 00:15:47.543 "period_us": 100000, 00:15:47.543 "enable": false 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "bdev_malloc_create", 00:15:47.543 "params": { 00:15:47.543 "name": "malloc0", 00:15:47.543 "num_blocks": 8192, 00:15:47.543 "block_size": 4096, 00:15:47.543 "physical_block_size": 4096, 00:15:47.543 "uuid": "b8aa54f6-474a-43b1-b9db-d0159eda0a55", 00:15:47.543 "optimal_io_boundary": 0, 00:15:47.543 "md_size": 0, 00:15:47.543 "dif_type": 0, 00:15:47.543 "dif_is_head_of_md": false, 00:15:47.543 "dif_pi_format": 0 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "bdev_wait_for_examine" 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "scsi", 00:15:47.543 "config": null 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "scheduler", 00:15:47.543 "config": [ 00:15:47.543 { 
00:15:47.543 "method": "framework_set_scheduler", 00:15:47.543 "params": { 00:15:47.543 "name": "static" 00:15:47.543 } 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "vhost_scsi", 00:15:47.543 "config": [] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "vhost_blk", 00:15:47.543 "config": [] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "ublk", 00:15:47.543 "config": [ 00:15:47.543 { 00:15:47.543 "method": "ublk_create_target", 00:15:47.543 "params": { 00:15:47.543 "cpumask": "1" 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "ublk_start_disk", 00:15:47.543 "params": { 00:15:47.543 "bdev_name": "malloc0", 00:15:47.543 "ublk_id": 0, 00:15:47.543 "num_queues": 1, 00:15:47.543 "queue_depth": 128 00:15:47.543 } 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "nbd", 00:15:47.543 "config": [] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "nvmf", 00:15:47.543 "config": [ 00:15:47.543 { 00:15:47.543 "method": "nvmf_set_config", 00:15:47.543 "params": { 00:15:47.543 "discovery_filter": "match_any", 00:15:47.543 "admin_cmd_passthru": { 00:15:47.543 "identify_ctrlr": false 00:15:47.543 }, 00:15:47.543 "dhchap_digests": [ 00:15:47.543 "sha256", 00:15:47.543 "sha384", 00:15:47.543 "sha512" 00:15:47.543 ], 00:15:47.543 "dhchap_dhgroups": [ 00:15:47.543 "null", 00:15:47.543 "ffdhe2048", 00:15:47.543 "ffdhe3072", 00:15:47.543 "ffdhe4096", 00:15:47.543 "ffdhe6144", 00:15:47.543 "ffdhe8192" 11:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.543 ] 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "nvmf_set_max_subsystems", 00:15:47.543 "params": { 00:15:47.543 "max_subsystems": 1024 00:15:47.543 } 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "method": "nvmf_set_crdt", 00:15:47.543 "params": { 00:15:47.543 "crdt1": 0, 00:15:47.543 "crdt2": 0, 00:15:47.543 "crdt3": 0 00:15:47.543 } 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }, 00:15:47.543 { 00:15:47.543 "subsystem": "iscsi", 00:15:47.543 "config": [ 00:15:47.543 { 00:15:47.543 "method": "iscsi_set_options", 00:15:47.543 "params": { 00:15:47.543 "node_base": "iqn.2016-06.io.spdk", 00:15:47.543 "max_sessions": 128, 00:15:47.543 "max_connections_per_session": 2, 00:15:47.543 "max_queue_depth": 64, 00:15:47.543 "default_time2wait": 2, 00:15:47.543 "default_time2retain": 20, 00:15:47.543 "first_burst_length": 8192, 00:15:47.543 "immediate_data": true, 00:15:47.543 "allow_duplicated_isid": false, 00:15:47.543 "error_recovery_level": 0, 00:15:47.543 "nop_timeout": 60, 00:15:47.543 "nop_in_interval": 30, 00:15:47.543 "disable_chap": false, 00:15:47.543 "require_chap": false, 00:15:47.543 "mutual_chap": false, 00:15:47.543 "chap_group": 0, 00:15:47.543 "max_large_datain_per_connection": 64, 00:15:47.543 "max_r2t_per_connection": 4, 00:15:47.543 "pdu_pool_size": 36864, 00:15:47.543 "immediate_data_pool_size": 16384, 00:15:47.543 "data_out_pool_size": 2048 00:15:47.543 } 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 } 00:15:47.543 ] 00:15:47.543 }' 00:15:47.543 11:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:47.543 11:27:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:47.802 [2024-11-15 11:27:30.563791] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
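The round-trip under test: the first target's live state was dumped with the save_config RPC, and a second target (pid 72547, below) boots from that JSON, fed in via process substitution as /dev/fd/63. The same flow by hand, as a sketch against a target already running on the default RPC socket:

  ./scripts/rpc.py save_config > /tmp/ublk_config.json    # dump live config, ublk disk included
  ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json   # restore it into a fresh target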
00:15:47.802 [2024-11-15 11:27:30.563982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72547 ] 00:15:47.802 [2024-11-15 11:27:30.746415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.061 [2024-11-15 11:27:30.865155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.994 [2024-11-15 11:27:31.856081] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:48.994 [2024-11-15 11:27:31.857315] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:48.994 [2024-11-15 11:27:31.863278] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:48.994 [2024-11-15 11:27:31.863386] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:48.994 [2024-11-15 11:27:31.863404] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:48.994 [2024-11-15 11:27:31.863412] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:48.994 [2024-11-15 11:27:31.871111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:48.994 [2024-11-15 11:27:31.871139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:48.994 [2024-11-15 11:27:31.878113] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:48.994 [2024-11-15 11:27:31.878228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:48.994 [2024-11-15 11:27:31.902071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:48.994 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:48.994 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:15:48.994 11:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:48.994 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.994 11:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:48.994 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72547 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72547 ']' 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72547 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:49.252 11:27:31 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72547 00:15:49.252 11:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:49.252 killing process with pid 72547 00:15:49.252 
11:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:49.252 11:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72547' 00:15:49.252 11:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72547 00:15:49.252 11:27:32 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72547 00:15:51.150 [2024-11-15 11:27:33.914523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.150 [2024-11-15 11:27:33.946255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.150 [2024-11-15 11:27:33.946448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.150 [2024-11-15 11:27:33.954188] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.150 [2024-11-15 11:27:33.954265] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:51.150 [2024-11-15 11:27:33.954278] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:51.150 [2024-11-15 11:27:33.954310] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:51.150 [2024-11-15 11:27:33.954534] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:53.064 11:27:35 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:53.064 00:15:53.064 real 0m10.326s 00:15:53.064 user 0m7.168s 00:15:53.064 sys 0m4.124s 00:15:53.064 ************************************ 00:15:53.064 END TEST test_save_ublk_config 00:15:53.064 ************************************ 00:15:53.064 11:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.064 11:27:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:53.064 11:27:35 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72639 00:15:53.064 11:27:35 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:53.064 11:27:35 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:53.064 11:27:35 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72639 00:15:53.064 11:27:35 ublk -- common/autotest_common.sh@833 -- # '[' -z 72639 ']' 00:15:53.064 11:27:35 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.064 11:27:35 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.064 11:27:35 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.064 11:27:35 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.064 11:27:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.064 [2024-11-15 11:27:35.821295] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
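The waitforlisten helper used here blocks until the new target's RPC socket answers. A rough sketch of that launch-and-wait pattern; the polling loop below is an assumption of the general idea, not the helper's exact code:

    build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # poll the RPC socket until the target is ready to accept commands
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done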
00:15:53.064 [2024-11-15 11:27:35.821501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72639 ] 00:15:53.353 [2024-11-15 11:27:36.005997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:53.353 [2024-11-15 11:27:36.133865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.353 [2024-11-15 11:27:36.133882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.286 11:27:37 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.286 11:27:37 ublk -- common/autotest_common.sh@866 -- # return 0 00:15:54.286 11:27:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:54.286 11:27:37 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:54.286 11:27:37 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:54.286 11:27:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.286 ************************************ 00:15:54.286 START TEST test_create_ublk 00:15:54.286 ************************************ 00:15:54.286 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:15:54.286 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:54.286 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.286 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.286 [2024-11-15 11:27:37.025094] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:54.286 [2024-11-15 11:27:37.027988] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:54.286 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.286 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:54.286 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:54.286 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.286 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.544 [2024-11-15 11:27:37.314253] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:54.544 [2024-11-15 11:27:37.314784] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:54.544 [2024-11-15 11:27:37.314803] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:54.544 [2024-11-15 11:27:37.314813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:54.544 [2024-11-15 11:27:37.322428] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:54.544 [2024-11-15 11:27:37.322454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:54.544 
[2024-11-15 11:27:37.329071] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:54.544 [2024-11-15 11:27:37.329891] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:54.544 [2024-11-15 11:27:37.343481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.544 11:27:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:54.544 { 00:15:54.544 "ublk_device": "/dev/ublkb0", 00:15:54.544 "id": 0, 00:15:54.544 "queue_depth": 512, 00:15:54.544 "num_queues": 4, 00:15:54.544 "bdev_name": "Malloc0" 00:15:54.544 } 00:15:54.544 ]' 00:15:54.544 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:54.545 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:54.545 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:54.545 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:54.545 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:54.802 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:54.802 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:54.802 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:54.802 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:54.802 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:54.802 11:27:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:54.802 11:27:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:54.802 11:27:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:54.802 11:27:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
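Before the verify job below runs, /dev/ublkb0 was brought up with the three RPCs traced above. A sketch of the equivalent direct rpc.py invocations, assuming the default socket:

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create 128 4096             # creates Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0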
00:15:54.803 11:27:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:54.803 fio: verification read phase will never start because write phase uses all of runtime 00:15:54.803 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:54.803 fio-3.35 00:15:54.803 Starting 1 process 00:16:06.995 00:16:06.995 fio_test: (groupid=0, jobs=1): err= 0: pid=72690: Fri Nov 15 11:27:47 2024 00:16:06.995 write: IOPS=12.2k, BW=47.5MiB/s (49.9MB/s)(475MiB/10001msec); 0 zone resets 00:16:06.995 clat (usec): min=52, max=4091, avg=80.72, stdev=124.46 00:16:06.995 lat (usec): min=53, max=4091, avg=81.47, stdev=124.46 00:16:06.995 clat percentiles (usec): 00:16:06.995 | 1.00th=[ 60], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:16:06.995 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 73], 60.00th=[ 74], 00:16:06.995 | 70.00th=[ 76], 80.00th=[ 78], 90.00th=[ 84], 95.00th=[ 90], 00:16:06.995 | 99.00th=[ 108], 99.50th=[ 118], 99.90th=[ 2573], 99.95th=[ 3163], 00:16:06.995 | 99.99th=[ 3720] 00:16:06.995 bw ( KiB/s): min=47896, max=53720, per=100.00%, avg=48809.68, stdev=1248.84, samples=19 00:16:06.995 iops : min=11974, max=13430, avg=12202.42, stdev=312.21, samples=19 00:16:06.995 lat (usec) : 100=98.35%, 250=1.32%, 500=0.01%, 750=0.02%, 1000=0.02% 00:16:06.995 lat (msec) : 2=0.12%, 4=0.16%, 10=0.01% 00:16:06.995 cpu : usr=3.39%, sys=8.05%, ctx=121727, majf=0, minf=796 00:16:06.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.995 issued rwts: total=0,121724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.995 00:16:06.995 Run status group 0 (all jobs): 00:16:06.995 WRITE: bw=47.5MiB/s (49.9MB/s), 47.5MiB/s-47.5MiB/s (49.9MB/s-49.9MB/s), io=475MiB (499MB), run=10001-10001msec 00:16:06.995 00:16:06.995 Disk stats (read/write): 00:16:06.995 ublkb0: ios=0/120522, merge=0/0, ticks=0/8845, in_queue=8845, util=99.11% 00:16:06.995 11:27:47 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 [2024-11-15 11:27:47.874832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:06.995 [2024-11-15 11:27:47.921159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:06.995 [2024-11-15 11:27:47.922048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:06.995 [2024-11-15 11:27:47.929239] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:06.995 [2024-11-15 11:27:47.929552] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:06.995 [2024-11-15 11:27:47.929571] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.995 11:27:47 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:06.995 11:27:47 
ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 [2024-11-15 11:27:47.945199] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:06.995 request: 00:16:06.995 { 00:16:06.995 "ublk_id": 0, 00:16:06.995 "method": "ublk_stop_disk", 00:16:06.995 "req_id": 1 00:16:06.995 } 00:16:06.995 Got JSON-RPC error response 00:16:06.995 response: 00:16:06.995 { 00:16:06.995 "code": -19, 00:16:06.995 "message": "No such device" 00:16:06.995 } 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:06.995 11:27:47 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.995 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.995 [2024-11-15 11:27:47.959189] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:06.996 [2024-11-15 11:27:47.966050] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:06.996 [2024-11-15 11:27:47.966101] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:06.996 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:47 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:06.996 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:06.996 11:27:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:06.996 00:16:06.996 real 0m11.757s 00:16:06.996 user 0m0.790s 00:16:06.996 sys 0m0.921s 00:16:06.996 ************************************ 00:16:06.996 END TEST test_create_ublk 00:16:06.996 ************************************ 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:06.996 11:27:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:06.996 11:27:48 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:06.996 11:27:48 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.996 11:27:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 ************************************ 00:16:06.996 START TEST test_create_multi_ublk 00:16:06.996 ************************************ 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 [2024-11-15 11:27:48.828081] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:06.996 [2024-11-15 11:27:48.830818] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 [2024-11-15 11:27:49.134255] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:06.996 [2024-11-15 
11:27:49.134789] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:06.996 [2024-11-15 11:27:49.134813] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:06.996 [2024-11-15 11:27:49.134829] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.996 [2024-11-15 11:27:49.150062] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.996 [2024-11-15 11:27:49.150095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.996 [2024-11-15 11:27:49.157101] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.996 [2024-11-15 11:27:49.157930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:06.996 [2024-11-15 11:27:49.172161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 [2024-11-15 11:27:49.464233] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:06.996 [2024-11-15 11:27:49.464744] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:06.996 [2024-11-15 11:27:49.464762] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:06.996 [2024-11-15 11:27:49.464771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.996 [2024-11-15 11:27:49.475092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.996 [2024-11-15 11:27:49.475118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.996 [2024-11-15 11:27:49.483061] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.996 [2024-11-15 11:27:49.483882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:06.996 [2024-11-15 11:27:49.499107] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:06.996 [2024-11-15 11:27:49.792267] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:06.996 [2024-11-15 11:27:49.792813] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:06.996 [2024-11-15 11:27:49.792835] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:06.996 [2024-11-15 11:27:49.792859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.996 [2024-11-15 11:27:49.800082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.996 [2024-11-15 11:27:49.800112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.996 [2024-11-15 11:27:49.806139] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.996 [2024-11-15 11:27:49.806952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:06.996 [2024-11-15 11:27:49.812927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.996 11:27:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 [2024-11-15 11:27:50.096337] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:07.255 [2024-11-15 11:27:50.096902] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:07.255 [2024-11-15 11:27:50.096919] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:07.255 [2024-11-15 11:27:50.096928] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.255 [2024-11-15 11:27:50.104663] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.255 [2024-11-15 11:27:50.104691] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.255 [2024-11-15 11:27:50.111125] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.255 [2024-11-15 11:27:50.111940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:07.255 [2024-11-15 11:27:50.118013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:07.255 { 00:16:07.255 "ublk_device": "/dev/ublkb0", 00:16:07.255 "id": 0, 00:16:07.255 "queue_depth": 512, 00:16:07.255 "num_queues": 4, 00:16:07.255 "bdev_name": "Malloc0" 00:16:07.255 }, 00:16:07.255 { 00:16:07.255 "ublk_device": "/dev/ublkb1", 00:16:07.255 "id": 1, 00:16:07.255 "queue_depth": 512, 00:16:07.255 "num_queues": 4, 00:16:07.255 "bdev_name": "Malloc1" 00:16:07.255 }, 00:16:07.255 { 00:16:07.255 "ublk_device": "/dev/ublkb2", 00:16:07.255 "id": 2, 00:16:07.255 "queue_depth": 512, 00:16:07.255 "num_queues": 4, 00:16:07.255 "bdev_name": "Malloc2" 00:16:07.255 }, 00:16:07.255 { 00:16:07.255 "ublk_device": "/dev/ublkb3", 00:16:07.255 "id": 3, 00:16:07.255 "queue_depth": 512, 00:16:07.255 "num_queues": 4, 00:16:07.255 "bdev_name": "Malloc3" 00:16:07.255 } 00:16:07.255 ]' 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:07.255 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.514 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:07.843 11:27:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:07.843 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.114 11:27:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:08.114 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:08.114 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.373 [2024-11-15 11:27:51.258469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:16:08.373 [2024-11-15 11:27:51.299654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.373 [2024-11-15 11:27:51.300872] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.373 [2024-11-15 11:27:51.306072] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.373 [2024-11-15 11:27:51.306397] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:08.373 [2024-11-15 11:27:51.306418] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.373 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.373 [2024-11-15 11:27:51.319207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.632 [2024-11-15 11:27:51.351729] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.632 [2024-11-15 11:27:51.352856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.632 [2024-11-15 11:27:51.360200] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.632 [2024-11-15 11:27:51.360523] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:08.632 [2024-11-15 11:27:51.360541] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.632 [2024-11-15 11:27:51.376195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.632 [2024-11-15 11:27:51.417164] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.632 [2024-11-15 11:27:51.418186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.632 [2024-11-15 11:27:51.425084] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.632 [2024-11-15 11:27:51.425411] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:08.632 [2024-11-15 11:27:51.425429] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:08.632 [2024-11-15 
11:27:51.438272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:08.632 [2024-11-15 11:27:51.476183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:08.632 [2024-11-15 11:27:51.477159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:08.632 [2024-11-15 11:27:51.485161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:08.632 [2024-11-15 11:27:51.485495] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:08.632 [2024-11-15 11:27:51.485512] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.632 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:08.891 [2024-11-15 11:27:51.778157] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:08.891 [2024-11-15 11:27:51.786081] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:08.891 [2024-11-15 11:27:51.786129] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:08.891 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:08.891 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:08.891 11:27:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:08.891 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.891 11:27:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:09.825 11:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.825 11:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:09.825 11:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:09.825 11:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.825 11:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.083 11:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.083 11:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.083 11:27:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:10.083 11:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.083 11:27:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.342 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.342 11:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:10.342 11:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:10.342 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.342 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
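The multi-disk teardown traced above follows a fixed order: stop every ublk disk, destroy the target, then delete the backing bdevs. A hedged sketch of that sequence (the loops are illustrative; the -t 120 timeout is taken from the destroy call in this log):

    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk "$i"
    done
    scripts/rpc.py -t 120 ublk_destroy_target   # destroy can be slow, hence the longer timeout
    for m in Malloc0 Malloc1 Malloc2 Malloc3; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done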
00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:10.601 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:10.859 11:27:53 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:10.859 00:16:10.859 real 0m4.759s 00:16:10.859 user 0m1.395s 00:16:10.859 sys 0m0.187s 00:16:10.859 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:10.859 ************************************ 00:16:10.859 END TEST test_create_multi_ublk 00:16:10.859 ************************************ 00:16:10.859 11:27:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.859 11:27:53 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:10.859 11:27:53 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:10.859 11:27:53 ublk -- ublk/ublk.sh@130 -- # killprocess 72639 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@952 -- # '[' -z 72639 ']' 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@956 -- # kill -0 72639 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@957 -- # uname 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72639 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:10.859 killing process with pid 72639 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72639' 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@971 -- # kill 72639 00:16:10.859 11:27:53 ublk -- common/autotest_common.sh@976 -- # wait 72639 00:16:11.794 [2024-11-15 11:27:54.681299] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:11.794 [2024-11-15 11:27:54.681376] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:13.169 00:16:13.169 real 0m30.757s 00:16:13.169 user 0m43.482s 00:16:13.169 sys 0m11.333s 00:16:13.169 11:27:55 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:13.169 ************************************ 00:16:13.169 END TEST ublk 00:16:13.169 ************************************ 00:16:13.169 11:27:55 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:13.169 11:27:55 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:13.169 11:27:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:13.169 
11:27:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:13.169 11:27:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.169 ************************************ 00:16:13.169 START TEST ublk_recovery 00:16:13.169 ************************************ 00:16:13.169 11:27:55 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:13.169 * Looking for test storage... 00:16:13.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:13.169 11:27:55 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:13.169 11:27:55 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:13.169 11:27:55 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:13.169 11:27:56 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.169 11:27:56 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:13.169 11:27:56 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.169 11:27:56 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:13.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.169 --rc genhtml_branch_coverage=1 00:16:13.169 --rc genhtml_function_coverage=1 00:16:13.169 --rc genhtml_legend=1 00:16:13.169 --rc geninfo_all_blocks=1 00:16:13.169 --rc geninfo_unexecuted_blocks=1 00:16:13.169 00:16:13.169 ' 00:16:13.169 11:27:56 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:13.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.169 --rc genhtml_branch_coverage=1 00:16:13.169 --rc genhtml_function_coverage=1 00:16:13.169 --rc genhtml_legend=1 00:16:13.169 --rc geninfo_all_blocks=1 00:16:13.169 --rc geninfo_unexecuted_blocks=1 00:16:13.169 00:16:13.169 ' 00:16:13.169 11:27:56 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:13.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.169 --rc genhtml_branch_coverage=1 00:16:13.169 --rc genhtml_function_coverage=1 00:16:13.169 --rc genhtml_legend=1 00:16:13.169 --rc geninfo_all_blocks=1 00:16:13.169 --rc geninfo_unexecuted_blocks=1 00:16:13.169 00:16:13.169 ' 00:16:13.169 11:27:56 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:13.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.169 --rc genhtml_branch_coverage=1 00:16:13.169 --rc genhtml_function_coverage=1 00:16:13.170 --rc genhtml_legend=1 00:16:13.170 --rc geninfo_all_blocks=1 00:16:13.170 --rc geninfo_unexecuted_blocks=1 00:16:13.170 00:16:13.170 ' 00:16:13.170 11:27:56 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:13.170 11:27:56 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:13.170 11:27:56 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:13.170 11:27:56 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73060 00:16:13.170 11:27:56 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:13.170 11:27:56 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:13.170 11:27:56 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73060 00:16:13.170 11:27:56 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73060 ']' 00:16:13.170 11:27:56 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.170 11:27:56 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:13.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.170 11:27:56 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.170 11:27:56 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:13.170 11:27:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.428 [2024-11-15 11:27:56.215178] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:16:13.428 [2024-11-15 11:27:56.216019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:16:13.687 [2024-11-15 11:27:56.400634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.687 [2024-11-15 11:27:56.528201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.687 [2024-11-15 11:27:56.528202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:16:14.620 11:27:57 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.620 [2024-11-15 11:27:57.364102] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:14.620 [2024-11-15 11:27:57.366941] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.620 11:27:57 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.620 malloc0 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.620 11:27:57 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:14.620 [2024-11-15 11:27:57.511298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:14.620 [2024-11-15 11:27:57.511486] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:14.620 [2024-11-15 11:27:57.511522] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:14.620 [2024-11-15 11:27:57.511535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:14.620 [2024-11-15 11:27:57.519261] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:14.620 [2024-11-15 11:27:57.519291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:14.620 [2024-11-15 11:27:57.527062] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:14.620 [2024-11-15 11:27:57.527239] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:14.620 [2024-11-15 11:27:57.543105] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:14.620 1 00:16:14.620 11:27:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.620 11:27:57 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:15.993 11:27:58 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73095 00:16:15.993 11:27:58 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:15.993 11:27:58 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:15.993 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.993 fio-3.35 00:16:15.993 Starting 1 process 00:16:21.259 11:28:03 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73060 00:16:21.259 11:28:03 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:26.550 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73060 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:26.550 11:28:08 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73206 00:16:26.550 11:28:08 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:26.550 11:28:08 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:26.550 11:28:08 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73206 00:16:26.550 11:28:08 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73206 ']' 00:16:26.550 11:28:08 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.550 11:28:08 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:26.550 11:28:08 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.550 11:28:08 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:26.550 11:28:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:26.550 [2024-11-15 11:28:08.712095] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
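The entries above capture the crash half of the recovery scenario: a ublk device backed by malloc0 is exported to the kernel, fio is started against /dev/ublkb1, the first target (pid 73060 here) is killed with SIGKILL mid-I/O, and a second target (pid 73206) is brought up to take over. Condensed into a shell sketch built only from the commands this trace shows ($rpc_py stands in for the test's rpc_cmd wrapper; treat this as an outline of ublk_recovery.sh, not the script itself):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  modprobe ublk_drv                                    # kernel side of ublk
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &            # first target (pid 73060 here)
  spdk_pid=$!
  "$rpc_py" ublk_create_target
  "$rpc_py" bdev_malloc_create -b malloc0 64 4096      # 64 MiB bdev, 4 KiB blocks
  "$rpc_py" ublk_start_disk malloc0 1 -q 2 -d 128      # /dev/ublkb1: 2 queues, qd 128
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  kill -9 "$spdk_pid"                                  # crash the target under load
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &            # second target (pid 73206 here)
  "$rpc_py" ublk_create_target
  "$rpc_py" bdev_malloc_create -b malloc0 64 4096
  "$rpc_py" ublk_recover_disk malloc0 1                # re-attach the surviving /dev/ublkb1

The ublk_recover_disk call is what drives the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY sequence traced below; fio keeps the same device node across the crash, which is why its 60-second run completes despite the target being replaced underneath it.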
00:16:26.550 [2024-11-15 11:28:08.712253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:16:26.550 [2024-11-15 11:28:08.894457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.550 [2024-11-15 11:28:09.046756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.550 [2024-11-15 11:28:09.046773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:16:27.116 11:28:09 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.116 [2024-11-15 11:28:09.926088] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:27.116 [2024-11-15 11:28:09.929088] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.116 11:28:09 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.116 11:28:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.374 malloc0 00:16:27.374 11:28:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.374 11:28:10 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:27.374 11:28:10 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.374 11:28:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:27.374 [2024-11-15 11:28:10.079267] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:27.374 [2024-11-15 11:28:10.079323] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:27.374 [2024-11-15 11:28:10.079340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:27.374 [2024-11-15 11:28:10.087142] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:27.374 [2024-11-15 11:28:10.087175] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:27.374 1 00:16:27.374 11:28:10 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.374 11:28:10 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73095 00:16:28.309 [2024-11-15 11:28:11.087202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:28.309 [2024-11-15 11:28:11.092141] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:28.309 [2024-11-15 11:28:11.092168] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:29.245 [2024-11-15 11:28:12.092212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:29.245 [2024-11-15 11:28:12.096119] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:29.245 [2024-11-15 11:28:12.096156] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:30.179 [2024-11-15 11:28:13.096193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:30.179 [2024-11-15 11:28:13.102092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:30.179 [2024-11-15 11:28:13.102121] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:30.179 [2024-11-15 11:28:13.102138] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:30.179 [2024-11-15 11:28:13.102281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:52.100 [2024-11-15 11:28:33.785099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:52.100 [2024-11-15 11:28:33.792880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:52.100 [2024-11-15 11:28:33.800330] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:52.100 [2024-11-15 11:28:33.800360] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:18.772 00:17:18.772 fio_test: (groupid=0, jobs=1): err= 0: pid=73102: Fri Nov 15 11:28:58 2024 00:17:18.772 read: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(2365MiB/60003msec) 00:17:18.772 slat (usec): min=2, max=624, avg= 6.26, stdev= 3.27 00:17:18.772 clat (usec): min=987, max=30253k, avg=6185.10, stdev=306119.62 00:17:18.772 lat (usec): min=993, max=30253k, avg=6191.36, stdev=306119.62 00:17:18.772 clat percentiles (msec): 00:17:18.772 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:18.772 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4], 00:17:18.772 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:17:18.772 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 10], 99.95th=[ 14], 00:17:18.772 | 99.99th=[17113] 00:17:18.772 bw ( KiB/s): min= 496, max=91336, per=100.00%, avg=79513.12, stdev=13937.25, samples=60 00:17:18.772 iops : min= 124, max=22834, avg=19878.27, stdev=3484.31, samples=60 00:17:18.772 write: IOPS=10.1k, BW=39.4MiB/s (41.3MB/s)(2362MiB/60003msec); 0 zone resets 00:17:18.772 slat (usec): min=2, max=847, avg= 6.45, stdev= 3.41 00:17:18.772 clat (usec): min=922, max=30254k, avg=6496.39, stdev=316036.51 00:17:18.772 lat (usec): min=927, max=30254k, avg=6502.84, stdev=316036.51 00:17:18.772 clat percentiles (msec): 00:17:18.772 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:17:18.772 | 30.00th=[ 4], 40.00th=[ 4], 50.00th=[ 4], 60.00th=[ 4], 00:17:18.772 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 4], 00:17:18.772 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 10], 99.95th=[ 14], 00:17:18.772 | 99.99th=[17113] 00:17:18.772 bw ( KiB/s): min= 528, max=92536, per=100.00%, avg=79429.15, stdev=13946.62, samples=60 00:17:18.772 iops : min= 132, max=23134, avg=19857.28, stdev=3486.65, samples=60 00:17:18.772 lat (usec) : 1000=0.01% 00:17:18.772 lat (msec) : 2=0.10%, 4=94.90%, 10=4.91%, 20=0.08%, >=2000=0.01% 00:17:18.772 cpu : usr=5.52%, sys=12.06%, ctx=36083, majf=0, minf=13 00:17:18.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:18.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:18.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:18.772 issued rwts: total=605322,604568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:18.772 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:17:18.772 00:17:18.772 Run status group 0 (all jobs): 00:17:18.772 READ: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=2365MiB (2479MB), run=60003-60003msec 00:17:18.772 WRITE: bw=39.4MiB/s (41.3MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=2362MiB (2476MB), run=60003-60003msec 00:17:18.772 00:17:18.772 Disk stats (read/write): 00:17:18.772 ublkb1: ios=603027/602340, merge=0/0, ticks=3679258/3794781, in_queue=7474040, util=99.92% 00:17:18.772 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.772 [2024-11-15 11:28:58.830550] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:18.772 [2024-11-15 11:28:58.875232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:18.772 [2024-11-15 11:28:58.875616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:18.772 [2024-11-15 11:28:58.886066] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:18.772 [2024-11-15 11:28:58.886201] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:18.772 [2024-11-15 11:28:58.886215] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.772 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.772 [2024-11-15 11:28:58.902262] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:18.772 [2024-11-15 11:28:58.910088] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:18.772 [2024-11-15 11:28:58.910157] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.772 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:18.772 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:18.772 11:28:58 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73206 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 73206 ']' 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 73206 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73206 00:17:18.772 killing process with pid 73206 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:18.772 11:28:58 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:18.773 11:28:58 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73206' 00:17:18.773 11:28:58 ublk_recovery -- common/autotest_common.sh@971 -- # kill 73206 00:17:18.773 11:28:58 ublk_recovery -- common/autotest_common.sh@976 -- # wait 73206 00:17:18.773 [2024-11-15 11:29:00.449655] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:18.773 [2024-11-15 
11:29:00.449723] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:19.031 ************************************ 00:17:19.031 END TEST ublk_recovery 00:17:19.031 ************************************ 00:17:19.031 00:17:19.031 real 1m5.857s 00:17:19.031 user 1m50.813s 00:17:19.031 sys 0m20.507s 00:17:19.031 11:29:01 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:19.031 11:29:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.031 11:29:01 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:19.031 11:29:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:19.031 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:17:19.031 11:29:01 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:17:19.031 11:29:01 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:19.031 11:29:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:19.031 11:29:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:19.031 11:29:01 -- common/autotest_common.sh@10 -- # set +x 00:17:19.031 ************************************ 00:17:19.031 START TEST ftl 00:17:19.031 ************************************ 00:17:19.031 11:29:01 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:19.031 * Looking for test storage... 00:17:19.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:19.031 11:29:01 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:19.031 11:29:01 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:17:19.031 11:29:01 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:19.290 11:29:02 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.290 11:29:02 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.290 11:29:02 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.290 11:29:02 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.290 11:29:02 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.290 11:29:02 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.290 11:29:02 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:19.290 11:29:02 ftl -- scripts/common.sh@345 -- # : 1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.290 11:29:02 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.290 11:29:02 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@353 -- # local d=1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.290 11:29:02 ftl -- scripts/common.sh@355 -- # echo 1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.290 11:29:02 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@353 -- # local d=2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.290 11:29:02 ftl -- scripts/common.sh@355 -- # echo 2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.290 11:29:02 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.290 11:29:02 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.290 11:29:02 ftl -- scripts/common.sh@368 -- # return 0 00:17:19.290 11:29:02 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.290 11:29:02 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.290 --rc genhtml_branch_coverage=1 00:17:19.290 --rc genhtml_function_coverage=1 00:17:19.290 --rc genhtml_legend=1 00:17:19.290 --rc geninfo_all_blocks=1 00:17:19.290 --rc geninfo_unexecuted_blocks=1 00:17:19.290 00:17:19.290 ' 00:17:19.290 11:29:02 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.290 --rc genhtml_branch_coverage=1 00:17:19.290 --rc genhtml_function_coverage=1 00:17:19.290 --rc genhtml_legend=1 00:17:19.290 --rc geninfo_all_blocks=1 00:17:19.290 --rc geninfo_unexecuted_blocks=1 00:17:19.290 00:17:19.290 ' 00:17:19.290 11:29:02 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.290 --rc genhtml_branch_coverage=1 00:17:19.290 --rc genhtml_function_coverage=1 00:17:19.290 --rc genhtml_legend=1 00:17:19.290 --rc geninfo_all_blocks=1 00:17:19.290 --rc geninfo_unexecuted_blocks=1 00:17:19.290 00:17:19.290 ' 00:17:19.290 11:29:02 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.290 --rc genhtml_branch_coverage=1 00:17:19.290 --rc genhtml_function_coverage=1 00:17:19.290 --rc genhtml_legend=1 00:17:19.290 --rc geninfo_all_blocks=1 00:17:19.290 --rc geninfo_unexecuted_blocks=1 00:17:19.290 00:17:19.290 ' 00:17:19.290 11:29:02 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:19.290 11:29:02 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:19.290 11:29:02 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:19.290 11:29:02 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:19.290 11:29:02 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
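The block above is the coverage-tooling probe from scripts/common.sh: `lcov --version` is parsed and compared against 2 with a component-wise version compare (the version strings are split on `.`, `-` and `:`), and since 1.15 < 2 the old-style `--rc lcov_branch_coverage=1 ...` option set is selected. A minimal sketch of that compare, under the simplifying assumption that one operator is checked per call (the real helper in scripts/common.sh handles more cases):

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
      local IFS=.-: i
      local -a v1 v2
      read -ra v1 <<< "$1"                     # 1.15 -> (1 15)
      read -ra v2 <<< "$3"                     # 2    -> (2)
      for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
          if (( ${v1[i]:-0} < ${v2[i]:-0} )); then [[ $2 == '<' || $2 == '<=' ]]; return; fi
          if (( ${v1[i]:-0} > ${v2[i]:-0} )); then [[ $2 == '>' || $2 == '>=' ]]; return; fi
      done
      [[ $2 == *'='* ]]                        # all components equal: only ==, <=, >= succeed
  }
  cmp_versions 1.15 '<' 2 && echo "old lcov: use the lcov_branch_coverage option set"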
00:17:19.290 11:29:02 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:19.290 11:29:02 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.290 11:29:02 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:19.290 11:29:02 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:19.290 11:29:02 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.290 11:29:02 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.290 11:29:02 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:19.290 11:29:02 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:19.290 11:29:02 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:19.290 11:29:02 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:19.290 11:29:02 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:19.290 11:29:02 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:19.290 11:29:02 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.291 11:29:02 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.291 11:29:02 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:19.291 11:29:02 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:19.291 11:29:02 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:19.291 11:29:02 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:19.291 11:29:02 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:19.291 11:29:02 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:19.291 11:29:02 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:19.291 11:29:02 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:19.291 11:29:02 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:19.291 11:29:02 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:19.291 11:29:02 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.291 11:29:02 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:19.291 11:29:02 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:19.291 11:29:02 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:19.291 11:29:02 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:19.291 11:29:02 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:19.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:19.807 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:19.807 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:19.807 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:19.807 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:19.807 11:29:02 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74015 00:17:19.807 11:29:02 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:19.807 11:29:02 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74015 00:17:19.807 11:29:02 ftl -- common/autotest_common.sh@833 -- # '[' -z 74015 ']' 00:17:19.807 11:29:02 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.807 11:29:02 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:19.807 11:29:02 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.807 11:29:02 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:19.807 11:29:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:19.807 [2024-11-15 11:29:02.684744] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:17:19.807 [2024-11-15 11:29:02.684933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74015 ] 00:17:20.065 [2024-11-15 11:29:02.870651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.323 [2024-11-15 11:29:03.023539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.889 11:29:03 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.889 11:29:03 ftl -- common/autotest_common.sh@866 -- # return 0 00:17:20.889 11:29:03 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:21.146 11:29:03 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:22.081 11:29:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:22.081 11:29:05 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@50 -- # break 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:23.016 11:29:05 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:23.276 11:29:06 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:23.276 11:29:06 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:23.276 11:29:06 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:23.276 11:29:06 ftl -- ftl/ftl.sh@63 -- # break 00:17:23.276 11:29:06 ftl -- ftl/ftl.sh@66 -- # killprocess 74015 00:17:23.276 11:29:06 ftl -- common/autotest_common.sh@952 -- # '[' -z 74015 ']' 00:17:23.276 11:29:06 ftl -- common/autotest_common.sh@956 -- # kill -0 74015 00:17:23.276 11:29:06 ftl -- common/autotest_common.sh@957 -- # uname 00:17:23.276 11:29:06 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:23.276 11:29:06 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74015 00:17:23.534 killing process with pid 74015 00:17:23.534 11:29:06 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:23.534 11:29:06 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:23.534 11:29:06 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74015' 00:17:23.534 11:29:06 ftl -- common/autotest_common.sh@971 -- # kill 74015 00:17:23.534 11:29:06 ftl -- common/autotest_common.sh@976 -- # wait 74015 00:17:25.450 11:29:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:25.450 11:29:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:25.450 11:29:08 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:25.450 11:29:08 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:25.450 11:29:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:25.450 ************************************ 00:17:25.450 START TEST ftl_fio_basic 00:17:25.450 ************************************ 00:17:25.450 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:25.450 * Looking for test storage... 00:17:25.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.450 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:25.450 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:17:25.450 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.710 --rc genhtml_branch_coverage=1 00:17:25.710 --rc genhtml_function_coverage=1 00:17:25.710 --rc genhtml_legend=1 00:17:25.710 --rc geninfo_all_blocks=1 00:17:25.710 --rc geninfo_unexecuted_blocks=1 00:17:25.710 00:17:25.710 ' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.710 --rc genhtml_branch_coverage=1 00:17:25.710 --rc genhtml_function_coverage=1 00:17:25.710 --rc genhtml_legend=1 00:17:25.710 --rc geninfo_all_blocks=1 00:17:25.710 --rc geninfo_unexecuted_blocks=1 00:17:25.710 00:17:25.710 ' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.710 --rc genhtml_branch_coverage=1 00:17:25.710 --rc genhtml_function_coverage=1 00:17:25.710 --rc genhtml_legend=1 00:17:25.710 --rc geninfo_all_blocks=1 00:17:25.710 --rc geninfo_unexecuted_blocks=1 00:17:25.710 00:17:25.710 ' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:25.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.710 --rc genhtml_branch_coverage=1 00:17:25.710 --rc genhtml_function_coverage=1 00:17:25.710 --rc genhtml_legend=1 00:17:25.710 --rc geninfo_all_blocks=1 00:17:25.710 --rc geninfo_unexecuted_blocks=1 00:17:25.710 00:17:25.710 ' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
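After the same lcov probe runs for the fio suite, ftl/common.sh re-resolves its directories relative to the script being executed (the `dirname`/`readlink -f` pair traced above and just below) before exporting the target and initiator defaults. The idiom, as a minimal standalone sketch:

  testdir=$(readlink -f "$(dirname "$0")")    # .../spdk/test/ftl for these tests
  rootdir=$(readlink -f "$testdir/../..")     # two levels up: the spdk repo root
  rpc_py=$rootdir/scripts/rpc.py              # RPC helper used by every step below

This keeps the suite runnable from any working directory, since every binary, config, and script path that follows is derived from $rootdir rather than $PWD.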
00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74158 00:17:25.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74158 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 74158 ']' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:25.710 11:29:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:25.710 [2024-11-15 11:29:08.641644] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
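At this point fio.sh has launched its own target (`spdk_tgt -m 7`, pid 74158 in this run) and blocks in `waitforlisten` until the RPC socket answers. A minimal sketch of that polling loop, assuming the standard `rpc_get_methods` RPC as the liveness probe (the real helper in autotest_common.sh uses the same max_retries=100 budget traced above but supports more address types):

  waitforlisten() {   # usage: waitforlisten <pid> [rpc_addr]
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
          "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1                                        # target never started listening
  }

The trace above shows the success path: the target answers on the socket and the helper returns 0, after which the suite starts assembling the FTL device.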
00:17:25.710 [2024-11-15 11:29:08.642842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74158 ] 00:17:25.969 [2024-11-15 11:29:08.830855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.229 [2024-11-15 11:29:08.953156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.229 [2024-11-15 11:29:08.953299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.229 [2024-11-15 11:29:08.953321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:27.165 11:29:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:27.425 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:27.683 { 00:17:27.683 "name": "nvme0n1", 00:17:27.683 "aliases": [ 00:17:27.683 "6faaecce-614e-4519-8f8f-e170f41cf9d7" 00:17:27.683 ], 00:17:27.683 "product_name": "NVMe disk", 00:17:27.683 "block_size": 4096, 00:17:27.683 "num_blocks": 1310720, 00:17:27.683 "uuid": "6faaecce-614e-4519-8f8f-e170f41cf9d7", 00:17:27.683 "numa_id": -1, 00:17:27.683 "assigned_rate_limits": { 00:17:27.683 "rw_ios_per_sec": 0, 00:17:27.683 "rw_mbytes_per_sec": 0, 00:17:27.683 "r_mbytes_per_sec": 0, 00:17:27.683 "w_mbytes_per_sec": 0 00:17:27.683 }, 00:17:27.683 "claimed": false, 00:17:27.683 "zoned": false, 00:17:27.683 "supported_io_types": { 00:17:27.683 "read": true, 00:17:27.683 "write": true, 00:17:27.683 "unmap": true, 00:17:27.683 "flush": true, 00:17:27.683 "reset": true, 00:17:27.683 "nvme_admin": true, 00:17:27.683 "nvme_io": true, 00:17:27.683 "nvme_io_md": false, 00:17:27.683 "write_zeroes": true, 00:17:27.683 "zcopy": false, 00:17:27.683 "get_zone_info": false, 00:17:27.683 "zone_management": false, 00:17:27.683 "zone_append": false, 00:17:27.683 "compare": true, 00:17:27.683 "compare_and_write": false, 00:17:27.683 "abort": true, 00:17:27.683 
"seek_hole": false, 00:17:27.683 "seek_data": false, 00:17:27.683 "copy": true, 00:17:27.683 "nvme_iov_md": false 00:17:27.683 }, 00:17:27.683 "driver_specific": { 00:17:27.683 "nvme": [ 00:17:27.683 { 00:17:27.683 "pci_address": "0000:00:11.0", 00:17:27.683 "trid": { 00:17:27.683 "trtype": "PCIe", 00:17:27.683 "traddr": "0000:00:11.0" 00:17:27.683 }, 00:17:27.683 "ctrlr_data": { 00:17:27.683 "cntlid": 0, 00:17:27.683 "vendor_id": "0x1b36", 00:17:27.683 "model_number": "QEMU NVMe Ctrl", 00:17:27.683 "serial_number": "12341", 00:17:27.683 "firmware_revision": "8.0.0", 00:17:27.683 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:27.683 "oacs": { 00:17:27.683 "security": 0, 00:17:27.683 "format": 1, 00:17:27.683 "firmware": 0, 00:17:27.683 "ns_manage": 1 00:17:27.683 }, 00:17:27.683 "multi_ctrlr": false, 00:17:27.683 "ana_reporting": false 00:17:27.683 }, 00:17:27.683 "vs": { 00:17:27.683 "nvme_version": "1.4" 00:17:27.683 }, 00:17:27.683 "ns_data": { 00:17:27.683 "id": 1, 00:17:27.683 "can_share": false 00:17:27.683 } 00:17:27.683 } 00:17:27.683 ], 00:17:27.683 "mp_policy": "active_passive" 00:17:27.683 } 00:17:27.683 } 00:17:27.683 ]' 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:27.683 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:27.941 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:27.941 11:29:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:28.199 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d5531fe5-8c65-494e-95db-f412f7aed588 00:17:28.199 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d5531fe5-8c65-494e-95db-f412f7aed588 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9f4db561-6c04-4684-af98-bf42a457139b 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9f4db561-6c04-4684-af98-bf42a457139b 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9f4db561-6c04-4684-af98-bf42a457139b 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9f4db561-6c04-4684-af98-bf42a457139b 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=9f4db561-6c04-4684-af98-bf42a457139b 
00:17:28.764 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:28.764 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f4db561-6c04-4684-af98-bf42a457139b 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:29.022 { 00:17:29.022 "name": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:29.022 "aliases": [ 00:17:29.022 "lvs/nvme0n1p0" 00:17:29.022 ], 00:17:29.022 "product_name": "Logical Volume", 00:17:29.022 "block_size": 4096, 00:17:29.022 "num_blocks": 26476544, 00:17:29.022 "uuid": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:29.022 "assigned_rate_limits": { 00:17:29.022 "rw_ios_per_sec": 0, 00:17:29.022 "rw_mbytes_per_sec": 0, 00:17:29.022 "r_mbytes_per_sec": 0, 00:17:29.022 "w_mbytes_per_sec": 0 00:17:29.022 }, 00:17:29.022 "claimed": false, 00:17:29.022 "zoned": false, 00:17:29.022 "supported_io_types": { 00:17:29.022 "read": true, 00:17:29.022 "write": true, 00:17:29.022 "unmap": true, 00:17:29.022 "flush": false, 00:17:29.022 "reset": true, 00:17:29.022 "nvme_admin": false, 00:17:29.022 "nvme_io": false, 00:17:29.022 "nvme_io_md": false, 00:17:29.022 "write_zeroes": true, 00:17:29.022 "zcopy": false, 00:17:29.022 "get_zone_info": false, 00:17:29.022 "zone_management": false, 00:17:29.022 "zone_append": false, 00:17:29.022 "compare": false, 00:17:29.022 "compare_and_write": false, 00:17:29.022 "abort": false, 00:17:29.022 "seek_hole": true, 00:17:29.022 "seek_data": true, 00:17:29.022 "copy": false, 00:17:29.022 "nvme_iov_md": false 00:17:29.022 }, 00:17:29.022 "driver_specific": { 00:17:29.022 "lvol": { 00:17:29.022 "lvol_store_uuid": "d5531fe5-8c65-494e-95db-f412f7aed588", 00:17:29.022 "base_bdev": "nvme0n1", 00:17:29.022 "thin_provision": true, 00:17:29.022 "num_allocated_clusters": 0, 00:17:29.022 "snapshot": false, 00:17:29.022 "clone": false, 00:17:29.022 "esnap_clone": false 00:17:29.022 } 00:17:29.022 } 00:17:29.022 } 00:17:29.022 ]' 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:29.022 11:29:11 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9f4db561-6c04-4684-af98-bf42a457139b 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=9f4db561-6c04-4684-af98-bf42a457139b 00:17:29.281 11:29:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:29.281 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f4db561-6c04-4684-af98-bf42a457139b 00:17:29.539 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:29.539 { 00:17:29.539 "name": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:29.539 "aliases": [ 00:17:29.539 "lvs/nvme0n1p0" 00:17:29.539 ], 00:17:29.539 "product_name": "Logical Volume", 00:17:29.539 "block_size": 4096, 00:17:29.539 "num_blocks": 26476544, 00:17:29.539 "uuid": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:29.539 "assigned_rate_limits": { 00:17:29.539 "rw_ios_per_sec": 0, 00:17:29.539 "rw_mbytes_per_sec": 0, 00:17:29.539 "r_mbytes_per_sec": 0, 00:17:29.539 "w_mbytes_per_sec": 0 00:17:29.539 }, 00:17:29.539 "claimed": false, 00:17:29.539 "zoned": false, 00:17:29.539 "supported_io_types": { 00:17:29.539 "read": true, 00:17:29.539 "write": true, 00:17:29.539 "unmap": true, 00:17:29.539 "flush": false, 00:17:29.539 "reset": true, 00:17:29.539 "nvme_admin": false, 00:17:29.539 "nvme_io": false, 00:17:29.539 "nvme_io_md": false, 00:17:29.539 "write_zeroes": true, 00:17:29.539 "zcopy": false, 00:17:29.539 "get_zone_info": false, 00:17:29.539 "zone_management": false, 00:17:29.539 "zone_append": false, 00:17:29.539 "compare": false, 00:17:29.539 "compare_and_write": false, 00:17:29.539 "abort": false, 00:17:29.539 "seek_hole": true, 00:17:29.539 "seek_data": true, 00:17:29.539 "copy": false, 00:17:29.539 "nvme_iov_md": false 00:17:29.539 }, 00:17:29.539 "driver_specific": { 00:17:29.539 "lvol": { 00:17:29.539 "lvol_store_uuid": "d5531fe5-8c65-494e-95db-f412f7aed588", 00:17:29.539 "base_bdev": "nvme0n1", 00:17:29.539 "thin_provision": true, 00:17:29.539 "num_allocated_clusters": 0, 00:17:29.539 "snapshot": false, 00:17:29.539 "clone": false, 00:17:29.539 "esnap_clone": false 00:17:29.539 } 00:17:29.539 } 00:17:29.539 } 00:17:29.539 ]' 00:17:29.539 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:29.798 11:29:12 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:30.056 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9f4db561-6c04-4684-af98-bf42a457139b 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=9f4db561-6c04-4684-af98-bf42a457139b 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:30.056 11:29:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f4db561-6c04-4684-af98-bf42a457139b 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:30.315 { 00:17:30.315 "name": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:30.315 "aliases": [ 00:17:30.315 "lvs/nvme0n1p0" 00:17:30.315 ], 00:17:30.315 "product_name": "Logical Volume", 00:17:30.315 "block_size": 4096, 00:17:30.315 "num_blocks": 26476544, 00:17:30.315 "uuid": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:30.315 "assigned_rate_limits": { 00:17:30.315 "rw_ios_per_sec": 0, 00:17:30.315 "rw_mbytes_per_sec": 0, 00:17:30.315 "r_mbytes_per_sec": 0, 00:17:30.315 "w_mbytes_per_sec": 0 00:17:30.315 }, 00:17:30.315 "claimed": false, 00:17:30.315 "zoned": false, 00:17:30.315 "supported_io_types": { 00:17:30.315 "read": true, 00:17:30.315 "write": true, 00:17:30.315 "unmap": true, 00:17:30.315 "flush": false, 00:17:30.315 "reset": true, 00:17:30.315 "nvme_admin": false, 00:17:30.315 "nvme_io": false, 00:17:30.315 "nvme_io_md": false, 00:17:30.315 "write_zeroes": true, 00:17:30.315 "zcopy": false, 00:17:30.315 "get_zone_info": false, 00:17:30.315 "zone_management": false, 00:17:30.315 "zone_append": false, 00:17:30.315 "compare": false, 00:17:30.315 "compare_and_write": false, 00:17:30.315 "abort": false, 00:17:30.315 "seek_hole": true, 00:17:30.315 "seek_data": true, 00:17:30.315 "copy": false, 00:17:30.315 "nvme_iov_md": false 00:17:30.315 }, 00:17:30.315 "driver_specific": { 00:17:30.315 "lvol": { 00:17:30.315 "lvol_store_uuid": "d5531fe5-8c65-494e-95db-f412f7aed588", 00:17:30.315 "base_bdev": "nvme0n1", 00:17:30.315 "thin_provision": true, 00:17:30.315 "num_allocated_clusters": 0, 00:17:30.315 "snapshot": false, 00:17:30.315 "clone": false, 00:17:30.315 "esnap_clone": false 00:17:30.315 } 00:17:30.315 } 00:17:30.315 } 00:17:30.315 ]' 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:30.315 11:29:13 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9f4db561-6c04-4684-af98-bf42a457139b -c nvc0n1p0 --l2p_dram_limit 60 00:17:30.574 [2024-11-15 11:29:13.436792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.574 [2024-11-15 11:29:13.436858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.574 [2024-11-15 11:29:13.436884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:30.574 
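The repeated bdev dumps above all feed `get_bdev_size`, which is just jq arithmetic over `bdev_get_bdevs` output: block_size 4096 × num_blocks 26476544 yields the 103424 MiB the trace prints for the lvol (and 4096 × 1310720 gave 5120 MiB for raw nvme0n1 earlier). A minimal sketch of the helper as traced:

  get_bdev_size() {   # prints a bdev's size in MiB
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 here
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 -> 103424 MiB
      echo $(( bs * nb / 1024 / 1024 ))
  }

Separately, the `fio.sh: line 52: [: -eq: unary operator expected` message a few entries up is a real shell bug surfaced by this run: an unset variable used unquoted as the left operand of `test` simply vanishes, leaving `[ -eq 1 ]`. Which variable fio.sh tests there is not visible in this trace, so `flag` below is a stand-in; the usual fix is quoting plus a default:

  flag=                     # empty, as at fio.sh line 52 in this run
  [ $flag -eq 1 ]           # [: -eq: unary operator expected (operand vanishes)
  [ "$flag" -eq 1 ]         # [: : integer expression expected (still broken)
  [ "${flag:-0}" -eq 1 ]    # well-formed: empty defaults to 0, test returns false

Because the failed test simply evaluates as false, the script falls through to its `-z` branch and the bdev_ftl_create above still runs, which is why the suite continues into the FTL startup trace that follows.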
[2024-11-15 11:29:13.436898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.574 [2024-11-15 11:29:13.437007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.574 [2024-11-15 11:29:13.437062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.574 [2024-11-15 11:29:13.437084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:30.574 [2024-11-15 11:29:13.437097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.574 [2024-11-15 11:29:13.437143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.574 [2024-11-15 11:29:13.438163] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.574 [2024-11-15 11:29:13.438203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.574 [2024-11-15 11:29:13.438218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.574 [2024-11-15 11:29:13.438235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:17:30.574 [2024-11-15 11:29:13.438248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.574 [2024-11-15 11:29:13.438424] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6f411de3-e622-4e57-af55-5366fba31352 00:17:30.574 [2024-11-15 11:29:13.440300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.574 [2024-11-15 11:29:13.440352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:30.574 [2024-11-15 11:29:13.440370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:30.574 [2024-11-15 11:29:13.440385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.574 [2024-11-15 11:29:13.449915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.574 [2024-11-15 11:29:13.449990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.575 [2024-11-15 11:29:13.450010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.416 ms 00:17:30.575 [2024-11-15 11:29:13.450025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.450219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.575 [2024-11-15 11:29:13.450245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.575 [2024-11-15 11:29:13.450260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:30.575 [2024-11-15 11:29:13.450280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.450439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.575 [2024-11-15 11:29:13.450465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.575 [2024-11-15 11:29:13.450479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:30.575 [2024-11-15 11:29:13.450494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.450552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.575 [2024-11-15 11:29:13.455760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.575 [2024-11-15 
11:29:13.455942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.575 [2024-11-15 11:29:13.455976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.229 ms 00:17:30.575 [2024-11-15 11:29:13.455994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.456097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.575 [2024-11-15 11:29:13.456121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.575 [2024-11-15 11:29:13.456139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:30.575 [2024-11-15 11:29:13.456151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.456211] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:30.575 [2024-11-15 11:29:13.456416] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:30.575 [2024-11-15 11:29:13.456450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.575 [2024-11-15 11:29:13.456468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:30.575 [2024-11-15 11:29:13.456487] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.575 [2024-11-15 11:29:13.456503] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.575 [2024-11-15 11:29:13.456519] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:30.575 [2024-11-15 11:29:13.456532] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.575 [2024-11-15 11:29:13.456546] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:30.575 [2024-11-15 11:29:13.456559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:30.575 [2024-11-15 11:29:13.456575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.575 [2024-11-15 11:29:13.456594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.575 [2024-11-15 11:29:13.456610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:17:30.575 [2024-11-15 11:29:13.456623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.456749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.575 [2024-11-15 11:29:13.456772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.575 [2024-11-15 11:29:13.456789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:30.575 [2024-11-15 11:29:13.456801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.575 [2024-11-15 11:29:13.456934] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.575 [2024-11-15 11:29:13.456952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.575 [2024-11-15 11:29:13.456972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.575 [2024-11-15 11:29:13.456985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457000] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:30.575 [2024-11-15 11:29:13.457011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.575 [2024-11-15 11:29:13.457088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.575 [2024-11-15 11:29:13.457113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.575 [2024-11-15 11:29:13.457132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:30.575 [2024-11-15 11:29:13.457151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.575 [2024-11-15 11:29:13.457163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.575 [2024-11-15 11:29:13.457178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:30.575 [2024-11-15 11:29:13.457189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:30.575 [2024-11-15 11:29:13.457220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.575 [2024-11-15 11:29:13.457266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.575 [2024-11-15 11:29:13.457308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.575 [2024-11-15 11:29:13.457355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.575 [2024-11-15 11:29:13.457397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.575 [2024-11-15 11:29:13.457448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.575 [2024-11-15 11:29:13.457484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.575 [2024-11-15 11:29:13.457521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:30.575 [2024-11-15 11:29:13.457537] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.575 [2024-11-15 11:29:13.457549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:30.575 [2024-11-15 11:29:13.457563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:30.575 [2024-11-15 11:29:13.457574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:30.575 [2024-11-15 11:29:13.457602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:30.575 [2024-11-15 11:29:13.457616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457631] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.575 [2024-11-15 11:29:13.457647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.575 [2024-11-15 11:29:13.457659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.575 [2024-11-15 11:29:13.457686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.575 [2024-11-15 11:29:13.457704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.575 [2024-11-15 11:29:13.457716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.575 [2024-11-15 11:29:13.457730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.575 [2024-11-15 11:29:13.457741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.575 [2024-11-15 11:29:13.457756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.575 [2024-11-15 11:29:13.457778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.575 [2024-11-15 11:29:13.457797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.575 [2024-11-15 11:29:13.457810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:30.575 [2024-11-15 11:29:13.457825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:30.575 [2024-11-15 11:29:13.457838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:30.576 [2024-11-15 11:29:13.457852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:30.576 [2024-11-15 11:29:13.457864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:30.576 [2024-11-15 11:29:13.457878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:30.576 [2024-11-15 11:29:13.457890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:30.576 [2024-11-15 11:29:13.457905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:30.576 [2024-11-15 11:29:13.457917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:30.576 [2024-11-15 11:29:13.457935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:30.576 [2024-11-15 11:29:13.457948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:30.576 [2024-11-15 11:29:13.457963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:30.576 [2024-11-15 11:29:13.457976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:30.576 [2024-11-15 11:29:13.457991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:30.576 [2024-11-15 11:29:13.458003] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.576 [2024-11-15 11:29:13.458019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.576 [2024-11-15 11:29:13.458049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.576 [2024-11-15 11:29:13.458066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.576 [2024-11-15 11:29:13.458079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.576 [2024-11-15 11:29:13.458094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.576 [2024-11-15 11:29:13.458113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.576 [2024-11-15 11:29:13.458129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.576 [2024-11-15 11:29:13.458143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:17:30.576 [2024-11-15 11:29:13.458161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.576 [2024-11-15 11:29:13.458245] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
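[editor's sketch] The layout dump above is internally consistent and easy to cross-check: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB shown for the l2p region, and at the 4096-byte block size that entry count maps 80 GiB of user-visible space, matching the 20971520-block ftl0 bdev reported below. The arithmetic, as a quick sketch:

  l2p_entries=20971520; addr_size=4; block=4096
  echo "$(( l2p_entries * addr_size / 1024 / 1024 )) MiB L2P"        # 80 -> "Region l2p ... 80.00 MiB"
  echo "$(( l2p_entries * block / 1024 / 1024 / 1024 )) GiB mapped"  # 80 -> ftl0 num_blocks 20971520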
00:17:30.576 [2024-11-15 11:29:13.458269] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:33.861 [2024-11-15 11:29:16.565189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.565508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:33.861 [2024-11-15 11:29:16.565650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3106.955 ms 00:17:33.861 [2024-11-15 11:29:16.565785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.605254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.605625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.861 [2024-11-15 11:29:16.605766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.117 ms 00:17:33.861 [2024-11-15 11:29:16.605896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.606146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.606312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:33.861 [2024-11-15 11:29:16.606446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:33.861 [2024-11-15 11:29:16.606506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.663530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.663823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:33.861 [2024-11-15 11:29:16.663860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.861 ms 00:17:33.861 [2024-11-15 11:29:16.663880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.663956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.663978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:33.861 [2024-11-15 11:29:16.664000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:33.861 [2024-11-15 11:29:16.664015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.664727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.664765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:33.861 [2024-11-15 11:29:16.664781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:17:33.861 [2024-11-15 11:29:16.664799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.664990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.665021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:33.861 [2024-11-15 11:29:16.665051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:33.861 [2024-11-15 11:29:16.665082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.686948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.687022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:33.861 [2024-11-15 
11:29:16.687067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.825 ms 00:17:33.861 [2024-11-15 11:29:16.687085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.701874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:33.861 [2024-11-15 11:29:16.723339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.723430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:33.861 [2024-11-15 11:29:16.723457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.071 ms 00:17:33.861 [2024-11-15 11:29:16.723484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.861 [2024-11-15 11:29:16.784489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.861 [2024-11-15 11:29:16.784568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:33.861 [2024-11-15 11:29:16.784599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.928 ms 00:17:33.861 [2024-11-15 11:29:16.784613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.862 [2024-11-15 11:29:16.784903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:33.862 [2024-11-15 11:29:16.784928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:33.862 [2024-11-15 11:29:16.784950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:33.862 [2024-11-15 11:29:16.784972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:16.816013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:16.816097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:34.120 [2024-11-15 11:29:16.816123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.903 ms 00:17:34.120 [2024-11-15 11:29:16.816136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:16.846627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:16.846698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:34.120 [2024-11-15 11:29:16.846724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.397 ms 00:17:34.120 [2024-11-15 11:29:16.846737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:16.847679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:16.847717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.120 [2024-11-15 11:29:16.847738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:17:34.120 [2024-11-15 11:29:16.847751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:16.932882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:16.933178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:34.120 [2024-11-15 11:29:16.933223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.029 ms 00:17:34.120 [2024-11-15 11:29:16.933243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 
11:29:16.967016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:16.967089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:34.120 [2024-11-15 11:29:16.967115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.604 ms 00:17:34.120 [2024-11-15 11:29:16.967129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:16.999840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:16.999903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:34.120 [2024-11-15 11:29:16.999928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.629 ms 00:17:34.120 [2024-11-15 11:29:16.999941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:17.031904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:17.032114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:34.120 [2024-11-15 11:29:17.032153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.888 ms 00:17:34.120 [2024-11-15 11:29:17.032168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:17.032276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:17.032297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:34.120 [2024-11-15 11:29:17.032323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:34.120 [2024-11-15 11:29:17.032337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:17.032528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.120 [2024-11-15 11:29:17.032553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:34.120 [2024-11-15 11:29:17.032571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:34.120 [2024-11-15 11:29:17.032584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.120 [2024-11-15 11:29:17.034071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3596.691 ms, result 0 00:17:34.120 { 00:17:34.120 "name": "ftl0", 00:17:34.120 "uuid": "6f411de3-e622-4e57-af55-5366fba31352" 00:17:34.120 } 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:34.120 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:34.684 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:34.684 [ 00:17:34.684 { 00:17:34.684 "name": "ftl0", 00:17:34.684 "aliases": [ 00:17:34.684 "6f411de3-e622-4e57-af55-5366fba31352" 00:17:34.684 ], 00:17:34.684 "product_name": "FTL 
disk", 00:17:34.684 "block_size": 4096, 00:17:34.684 "num_blocks": 20971520, 00:17:34.684 "uuid": "6f411de3-e622-4e57-af55-5366fba31352", 00:17:34.684 "assigned_rate_limits": { 00:17:34.684 "rw_ios_per_sec": 0, 00:17:34.684 "rw_mbytes_per_sec": 0, 00:17:34.684 "r_mbytes_per_sec": 0, 00:17:34.684 "w_mbytes_per_sec": 0 00:17:34.684 }, 00:17:34.684 "claimed": false, 00:17:34.684 "zoned": false, 00:17:34.684 "supported_io_types": { 00:17:34.684 "read": true, 00:17:34.684 "write": true, 00:17:34.684 "unmap": true, 00:17:34.684 "flush": true, 00:17:34.684 "reset": false, 00:17:34.684 "nvme_admin": false, 00:17:34.684 "nvme_io": false, 00:17:34.684 "nvme_io_md": false, 00:17:34.684 "write_zeroes": true, 00:17:34.684 "zcopy": false, 00:17:34.684 "get_zone_info": false, 00:17:34.684 "zone_management": false, 00:17:34.684 "zone_append": false, 00:17:34.684 "compare": false, 00:17:34.684 "compare_and_write": false, 00:17:34.684 "abort": false, 00:17:34.684 "seek_hole": false, 00:17:34.684 "seek_data": false, 00:17:34.684 "copy": false, 00:17:34.684 "nvme_iov_md": false 00:17:34.684 }, 00:17:34.685 "driver_specific": { 00:17:34.685 "ftl": { 00:17:34.685 "base_bdev": "9f4db561-6c04-4684-af98-bf42a457139b", 00:17:34.685 "cache": "nvc0n1p0" 00:17:34.685 } 00:17:34.685 } 00:17:34.685 } 00:17:34.685 ] 00:17:34.685 11:29:17 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:17:34.685 11:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:34.685 11:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:34.996 11:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:34.996 11:29:17 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:35.254 [2024-11-15 11:29:18.066819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.066889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.254 [2024-11-15 11:29:18.066913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:35.254 [2024-11-15 11:29:18.066930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.066980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:35.254 [2024-11-15 11:29:18.070704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.070900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.254 [2024-11-15 11:29:18.070934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.692 ms 00:17:35.254 [2024-11-15 11:29:18.070948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.071461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.071491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.254 [2024-11-15 11:29:18.071518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:17:35.254 [2024-11-15 11:29:18.071531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.074737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.074773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.254 
[2024-11-15 11:29:18.074792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.171 ms 00:17:35.254 [2024-11-15 11:29:18.074805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.081377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.081410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:35.254 [2024-11-15 11:29:18.081429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.531 ms 00:17:35.254 [2024-11-15 11:29:18.081441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.113448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.113513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.254 [2024-11-15 11:29:18.113537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.855 ms 00:17:35.254 [2024-11-15 11:29:18.113551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.132643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.132691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.254 [2024-11-15 11:29:18.132718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.991 ms 00:17:35.254 [2024-11-15 11:29:18.132731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.132973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.132994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.254 [2024-11-15 11:29:18.133011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:17:35.254 [2024-11-15 11:29:18.133023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.163793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.163844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:35.254 [2024-11-15 11:29:18.163867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.685 ms 00:17:35.254 [2024-11-15 11:29:18.163881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.254 [2024-11-15 11:29:18.194770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.254 [2024-11-15 11:29:18.194819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:35.254 [2024-11-15 11:29:18.194856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.824 ms 00:17:35.254 [2024-11-15 11:29:18.194875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.514 [2024-11-15 11:29:18.225602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.514 [2024-11-15 11:29:18.225806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.514 [2024-11-15 11:29:18.225858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.647 ms 00:17:35.514 [2024-11-15 11:29:18.225872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.514 [2024-11-15 11:29:18.255943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.514 [2024-11-15 11:29:18.255982] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.514 [2024-11-15 11:29:18.256002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.915 ms 00:17:35.514 [2024-11-15 11:29:18.256013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.514 [2024-11-15 11:29:18.256101] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.514 [2024-11-15 11:29:18.256158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 
[2024-11-15 11:29:18.256489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:35.514 [2024-11-15 11:29:18.256862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.256976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:35.514 [2024-11-15 11:29:18.257768] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:35.514 [2024-11-15 11:29:18.257783] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6f411de3-e622-4e57-af55-5366fba31352 00:17:35.514 [2024-11-15 11:29:18.257796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:35.514 [2024-11-15 11:29:18.257812] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:35.514 [2024-11-15 11:29:18.257824] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:35.514 [2024-11-15 11:29:18.257842] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:35.514 [2024-11-15 11:29:18.257854] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:35.514 [2024-11-15 11:29:18.257868] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:35.514 [2024-11-15 11:29:18.257881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:35.514 [2024-11-15 11:29:18.257894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:35.514 [2024-11-15 11:29:18.257905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:35.514 [2024-11-15 11:29:18.257920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.514 [2024-11-15 11:29:18.257932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:35.514 [2024-11-15 11:29:18.257948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:17:35.514 [2024-11-15 11:29:18.257960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.514 [2024-11-15 11:29:18.274586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.514 [2024-11-15 11:29:18.274629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:35.515 [2024-11-15 11:29:18.274664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.546 ms 00:17:35.515 [2024-11-15 11:29:18.274676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.515 [2024-11-15 11:29:18.275125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.515 [2024-11-15 11:29:18.275179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:35.515 [2024-11-15 11:29:18.275212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:35.515 [2024-11-15 11:29:18.275225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.515 [2024-11-15 11:29:18.331580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:35.515 [2024-11-15 11:29:18.331657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:35.515 [2024-11-15 11:29:18.331704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:35.515 [2024-11-15 11:29:18.331717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:35.515 [2024-11-15 11:29:18.331807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.515 [2024-11-15 11:29:18.331823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:35.515 [2024-11-15 11:29:18.331839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.515 [2024-11-15 11:29:18.331851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.515 [2024-11-15 11:29:18.332012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.515 [2024-11-15 11:29:18.332056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:35.515 [2024-11-15 11:29:18.332092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.515 [2024-11-15 11:29:18.332104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.515 [2024-11-15 11:29:18.332154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.515 [2024-11-15 11:29:18.332184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:35.515 [2024-11-15 11:29:18.332200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.515 [2024-11-15 11:29:18.332212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.515 [2024-11-15 11:29:18.444679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.515 [2024-11-15 11:29:18.445008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:35.515 [2024-11-15 11:29:18.445069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.515 [2024-11-15 11:29:18.445086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.531444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.531518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:35.772 [2024-11-15 11:29:18.531541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.531554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.531709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.531728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:35.772 [2024-11-15 11:29:18.531748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.531760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.531854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.531871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:35.772 [2024-11-15 11:29:18.531885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.531897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.532096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.532117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:35.772 [2024-11-15 11:29:18.532150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.532165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.532260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.532280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:35.772 [2024-11-15 11:29:18.532296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.532309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.532369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.532385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:35.772 [2024-11-15 11:29:18.532401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.532413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.532488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.772 [2024-11-15 11:29:18.532505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:35.772 [2024-11-15 11:29:18.532521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.772 [2024-11-15 11:29:18.532533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.772 [2024-11-15 11:29:18.532747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 465.882 ms, result 0
00:17:35.772 true
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74158
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 74158 ']'
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 74158
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74158
00:17:35.772 killing process with pid 74158 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74158'
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 74158
00:17:35.772 11:29:18 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 74158
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib=
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:41.035 11:29:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:17:41.035 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:17:41.035 fio-3.35
00:17:41.035 Starting 1 thread
00:17:46.306
00:17:46.306 test: (groupid=0, jobs=1): err= 0: pid=74381: Fri Nov 15 11:29:29 2024
00:17:46.306 read: IOPS=891, BW=59.2MiB/s (62.1MB/s)(255MiB/4298msec)
00:17:46.306 slat (nsec): min=5583, max=77338, avg=7917.06, stdev=4385.56
00:17:46.306 clat (usec): min=339, max=744, avg=497.49, stdev=49.68
00:17:46.306 lat (usec): min=347, max=760, avg=505.41, stdev=50.38
00:17:46.306 clat percentiles (usec):
00:17:46.306 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[ 449], 20.00th=[ 461],
00:17:46.306 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498],
00:17:46.306 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 594],
00:17:46.306 | 99.00th=[ 644], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 725],
00:17:46.306 | 99.99th=[ 742]
00:17:46.306 write: IOPS=898, BW=59.6MiB/s (62.5MB/s)(256MiB/4293msec); 0 zone resets
00:17:46.306 slat (usec): min=17, max=120, avg=25.14, stdev= 7.71
00:17:46.306 clat (usec): min=409, max=3574, avg=572.44, stdev=81.02
00:17:46.306 lat (usec): min=432, max=3596, avg=597.58, stdev=81.20
00:17:46.306 clat percentiles (usec):
00:17:46.306 | 1.00th=[ 457], 5.00th=[ 482], 10.00th=[ 498], 20.00th=[ 529],
00:17:46.306 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 578],
00:17:46.306 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668],
00:17:46.306 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 1123],
00:17:46.306 | 99.99th=[ 3589]
00:17:46.306 bw ( KiB/s): min=59432, max=63240, per=100.00%, avg=61183.00, stdev=1057.05, samples=8
00:17:46.306 iops : min= 874, max= 930, avg=899.75, stdev=15.54, samples=8
00:17:46.306 lat (usec) : 500=35.97%, 750=63.09%, 1000=0.91%
00:17:46.306 lat (msec) : 2=0.01%, 4=0.01%
00:17:46.306 cpu : usr=98.77%, sys=0.16%, ctx=8, majf=0, minf=1169
00:17:46.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:46.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:46.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:46.306 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:46.306 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:46.306
00:17:46.306 Run status group 0 (all jobs):
00:17:46.306 READ: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=255MiB (267MB), run=4298-4298msec
00:17:46.306 WRITE: bw=59.6MiB/s (62.5MB/s), 59.6MiB/s-59.6MiB/s (62.5MB/s-62.5MB/s), io=256MiB (269MB), run=4293-4293msec
00:17:48.211 -----------------------------------------------------
00:17:48.211 Suppressions used:
00:17:48.211 count bytes template
00:17:48.211 1 5 /usr/src/fio/parse.c
00:17:48.211 1 8 libtcmalloc_minimal.so
00:17:48.211 1 904 libcrypto.so
00:17:48.211 -----------------------------------------------------
00:17:48.211
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib=
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:48.211 11:29:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:17:48.470 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:17:48.471 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:17:48.471 fio-3.35
00:17:48.471 Starting 2 threads
00:18:20.547
00:18:20.547 first_half: (groupid=0, jobs=1): err= 0: pid=74485: Fri Nov 15 11:30:01 2024
00:18:20.547 read: IOPS=2262, BW=9048KiB/s (9265kB/s)(256MiB/28945msec)
00:18:20.547 slat (nsec): min=4329, max=64343, avg=7733.53, stdev=3360.47
00:18:20.547 clat (usec): min=787, max=375641, avg=48245.28, stdev=28536.18
00:18:20.547 lat (usec): min=793, max=375650, avg=48253.01, stdev=28536.43
00:18:20.547 clat percentiles (msec):
00:18:20.547 | 1.00th=[ 13], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41],
00:18:20.547 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43],
00:18:20.547 | 70.00th=[ 44], 80.00th=[ 47], 90.00th=[ 51], 95.00th=[ 92],
00:18:20.547 | 99.00th=[ 197], 99.50th=[ 209], 99.90th=[ 284], 99.95th=[ 334],
00:18:20.547 | 99.99th=[ 368]
00:18:20.547 write: IOPS=2268, BW=9073KiB/s (9290kB/s)(256MiB/28894msec); 0 zone resets
00:18:20.547 slat (usec): min=5, max=395, avg= 9.12, stdev= 6.04
00:18:20.547 clat (usec): min=456, max=59330, avg=8297.35, stdev=8352.64
00:18:20.547 lat (usec): min=469, max=59338, avg=8306.48, stdev=8352.80
00:18:20.547 clat percentiles (usec):
00:18:20.547 | 1.00th=[ 1139], 5.00th=[ 1582], 10.00th=[ 1942], 20.00th=[ 3392],
00:18:20.547 | 30.00th=[ 4359], 40.00th=[ 5604], 50.00th=[ 6194], 60.00th=[ 7177],
00:18:20.547 | 70.00th=[ 7767], 80.00th=[ 9372], 90.00th=[16319], 95.00th=[24249],
00:18:20.547 | 99.00th=[44827], 99.50th=[46400], 99.90th=[55313], 99.95th=[57410],
00:18:20.547 | 99.99th=[58983]
00:18:20.547 bw ( KiB/s): min= 3800, max=41576, per=100.00%, avg=24790.10, stdev=10593.70, samples=21
00:18:20.547 iops : min= 950, max=10394, avg=6197.52, stdev=2648.43, samples=21
00:18:20.547 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.22%
00:18:20.547 lat (msec) : 2=5.17%, 4=7.57%, 10=28.11%, 20=7.49%, 50=46.33%
00:18:20.547 lat (msec) : 100=2.78%, 250=2.25%, 500=0.06%
00:18:20.547 cpu : usr=98.79%, sys=0.33%, ctx=238, majf=0, minf=5542
00:18:20.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:18:20.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:20.547 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:20.547 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:20.547 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:20.547 second_half: (groupid=0, jobs=1): err= 0: pid=74486: Fri Nov 15 11:30:01 2024
00:18:20.547 read: IOPS=2282, BW=9130KiB/s (9349kB/s)(256MiB/28692msec)
00:18:20.547 slat (nsec): min=4308, max=75766, avg=7925.91, stdev=3592.08
00:18:20.547 clat (msec): min=11, max=227, avg=48.58, stdev=24.62
00:18:20.547 lat (msec): min=11, max=227, avg=48.59, stdev=24.62
00:18:20.547 clat percentiles (msec):
00:18:20.547 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41],
00:18:20.547 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43],
00:18:20.547 | 70.00th=[ 45], 80.00th=[ 47], 90.00th=[ 53], 95.00th=[ 86],
00:18:20.547 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 213], 99.95th=[ 218],
00:18:20.547 | 99.99th=[ 224]
00:18:20.547 write: IOPS=2297, BW=9188KiB/s (9409kB/s)(256MiB/28531msec); 0 zone resets
00:18:20.547 slat (usec): min=5, max=483, avg= 9.12, stdev= 7.28
00:18:20.547 clat (usec): min=512, max=46299, avg=7464.01, stdev=4772.92
00:18:20.547 lat (usec): min=523, max=46306, avg=7473.13, stdev=4773.13
00:18:20.547 clat percentiles (usec):
00:18:20.547 | 1.00th=[ 1401], 5.00th=[ 2180], 10.00th=[ 3130], 20.00th=[ 4113],
00:18:20.547 | 30.00th=[ 5211], 40.00th=[ 5866], 50.00th=[ 6390], 60.00th=[ 7177],
00:18:20.547 | 70.00th=[ 7570], 80.00th=[ 8979], 90.00th=[14615], 95.00th=[16712],
00:18:20.547 | 99.00th=[25035], 99.50th=[32375], 99.90th=[41681], 99.95th=[43779],
00:18:20.547 | 99.99th=[44827]
00:18:20.547 bw ( KiB/s): min= 1880, max=39408, per=100.00%, avg=21839.33, stdev=12249.61, samples=24
00:18:20.547 iops : min= 470, max= 9852, avg=5459.83, stdev=3062.40, samples=24
00:18:20.547 lat (usec) : 750=0.04%, 1000=0.11%
00:18:20.547 lat (msec) : 2=1.79%, 4=7.19%, 10=32.52%, 20=7.61%, 50=44.94%
00:18:20.547 lat (msec) : 100=3.68%, 250=2.13%
00:18:20.547 cpu : usr=98.89%, sys=0.36%, ctx=58, majf=0, minf=5573
00:18:20.547 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:18:20.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:20.547 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:20.547 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:20.547 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:20.547
00:18:20.547 Run status group 0 (all jobs):
00:18:20.547 READ: bw=17.7MiB/s (18.5MB/s), 9048KiB/s-9130KiB/s (9265kB/s-9349kB/s), io=512MiB (536MB), run=28692-28945msec
00:18:20.547 WRITE: bw=17.7MiB/s (18.6MB/s), 9073KiB/s-9188KiB/s (9290kB/s-9409kB/s), io=512MiB (537MB), run=28531-28894msec
00:18:20.806 -----------------------------------------------------
00:18:20.806 Suppressions used:
00:18:20.806 count bytes template
00:18:20.806 2 10 /usr/src/fio/parse.c
00:18:20.806 4 384 /usr/src/fio/iolog.c
00:18:20.806 1 8 libtcmalloc_minimal.so
00:18:20.806 1 904 libcrypto.so
00:18:20.806 -----------------------------------------------------
00:18:20.806
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib=
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:20.806 11:30:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:18:21.065 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:18:21.065 fio-3.35
00:18:21.065 Starting 1 thread
00:18:39.161
00:18:39.161 test: (groupid=0, jobs=1): err= 0: pid=74844: Fri Nov 15 11:30:21 2024
00:18:39.161 read: IOPS=5929, BW=23.2MiB/s (24.3MB/s)(255MiB/10996msec)
00:18:39.161 slat (nsec): min=4172, max=78686, avg=7193.43, stdev=3805.84
00:18:39.161 clat (usec): min=991, max=42332, avg=21575.24, stdev=1089.72
00:18:39.161 lat (usec): min=1011, max=42340, avg=21582.44, stdev=1089.78
00:18:39.161 clat percentiles (usec):
00:18:39.161 | 1.00th=[19792], 5.00th=[20317], 10.00th=[20579], 20.00th=[21103],
00:18:39.161 | 30.00th=[21365], 40.00th=[21365], 50.00th=[21627], 60.00th=[21627],
00:18:39.161 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22152], 95.00th=[22414],
00:18:39.161 | 99.00th=[25822], 99.50th=[26346], 99.90th=[31589], 99.95th=[36963],
00:18:39.161 | 99.99th=[41681]
00:18:39.161 write: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(256MiB/5640msec); 0 zone resets
00:18:39.161 slat (usec): min=4, max=570, avg= 9.73, stdev= 7.94
00:18:39.161 clat (usec): min=682, max=65203, avg=10955.09, stdev=13806.35
00:18:39.161 lat (usec): min=693, max=65211, avg=10964.82, stdev=13806.38
00:18:39.161 clat percentiles (usec):
00:18:39.161 | 1.00th=[ 979], 5.00th=[ 1188], 10.00th=[ 1319], 20.00th=[ 1500],
00:18:39.161 | 30.00th=[ 1680], 40.00th=[ 2147], 50.00th=[ 7177], 60.00th=[ 8029],
00:18:39.161 | 70.00th=[ 9372], 80.00th=[11469], 90.00th=[40633], 95.00th=[42730],
00:18:39.161 | 99.00th=[46400], 99.50th=[47973], 99.90th=[51643], 99.95th=[53216],
00:18:39.161 | 99.99th=[61604]
00:18:39.161 bw ( KiB/s): min=10096, max=63640, per=94.00%, avg=43690.67, stdev=13992.11, samples=12
00:18:39.161 iops : min= 2524, max=15910, avg=10922.67, stdev=3498.03, samples=12
00:18:39.161 lat (usec) : 750=0.02%, 1000=0.59%
00:18:39.161 lat (msec) : 2=18.78%, 4=1.56%, 10=16.63%, 20=5.40%, 50=56.92%
00:18:39.161 lat (msec) : 100=0.11%
00:18:39.161 cpu : usr=97.91%, sys=1.00%, ctx=111, majf=0, minf=5565
00:18:39.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:18:39.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:39.161 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:39.161 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:39.161 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:39.161
00:18:39.161 Run status group 0 (all jobs):
00:18:39.161 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=255MiB (267MB), run=10996-10996msec
00:18:39.161 WRITE: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=256MiB (268MB), run=5640-5640msec
00:18:40.540 -----------------------------------------------------
00:18:40.540 Suppressions used:
00:18:40.540 count bytes template
00:18:40.540 1 5 /usr/src/fio/parse.c
00:18:40.540 2 192 /usr/src/fio/iolog.c
00:18:40.540 1 8 libtcmalloc_minimal.so
00:18:40.540 1 904 libcrypto.so
00:18:40.540 -----------------------------------------------------
00:18:40.540
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:40.540 Remove shared memory files
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57933 /dev/shm/spdk_tgt_trace.pid73060
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:18:40.540 ************************************
00:18:40.540 END TEST ftl_fio_basic
00:18:40.540 ************************************
00:18:40.540
00:18:40.540 real 1m15.140s
00:18:40.540 user 2m47.240s
00:18:40.540 sys 0m4.290s
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable
00:18:40.540 11:30:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:18:40.540 11:30:23 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:18:40.540 11:30:23 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:18:40.540 11:30:23 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:18:40.540 11:30:23 ftl -- common/autotest_common.sh@10 -- # set +x
00:18:40.540 ************************************
00:18:40.540 START TEST ftl_bdevperf
00:18:40.540 ************************************
00:18:40.540 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:18:40.799 * Looking for test storage...
00:18:40.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:40.799 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:18:40.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.800 --rc genhtml_branch_coverage=1
00:18:40.800 --rc genhtml_function_coverage=1
00:18:40.800 --rc genhtml_legend=1
00:18:40.800 --rc geninfo_all_blocks=1
00:18:40.800 --rc geninfo_unexecuted_blocks=1
00:18:40.800
00:18:40.800 '
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:18:40.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.800 --rc genhtml_branch_coverage=1
00:18:40.800 --rc genhtml_function_coverage=1
00:18:40.800 --rc genhtml_legend=1
00:18:40.800 --rc geninfo_all_blocks=1
00:18:40.800 --rc geninfo_unexecuted_blocks=1
00:18:40.800
00:18:40.800 '
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:18:40.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.800 --rc genhtml_branch_coverage=1
00:18:40.800 --rc genhtml_function_coverage=1
00:18:40.800 --rc genhtml_legend=1
00:18:40.800 --rc geninfo_all_blocks=1
00:18:40.800 --rc geninfo_unexecuted_blocks=1
00:18:40.800
00:18:40.800 '
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:18:40.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.800 --rc genhtml_branch_coverage=1
00:18:40.800 --rc genhtml_function_coverage=1
00:18:40.800 --rc genhtml_legend=1
00:18:40.800 --rc geninfo_all_blocks=1
00:18:40.800 --rc geninfo_unexecuted_blocks=1
00:18:40.800
00:18:40.800 '
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75110
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75110
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 75110 ']'
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:40.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:18:40.800 11:30:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:41.059 [2024-11-15 11:30:23.801487] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
00:18:41.059 [2024-11-15 11:30:23.802202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75110 ] 00:18:41.059 [2024-11-15 11:30:23.991757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.318 [2024-11-15 11:30:24.118573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:41.886 11:30:24 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:42.453 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:42.453 { 00:18:42.453 "name": "nvme0n1", 00:18:42.453 "aliases": [ 00:18:42.453 "44c2c440-b03e-4831-afcc-b17c0fdc6ef1" 00:18:42.453 ], 00:18:42.453 "product_name": "NVMe disk", 00:18:42.453 "block_size": 4096, 00:18:42.453 "num_blocks": 1310720, 00:18:42.453 "uuid": "44c2c440-b03e-4831-afcc-b17c0fdc6ef1", 00:18:42.453 "numa_id": -1, 00:18:42.453 "assigned_rate_limits": { 00:18:42.453 "rw_ios_per_sec": 0, 00:18:42.453 "rw_mbytes_per_sec": 0, 00:18:42.453 "r_mbytes_per_sec": 0, 00:18:42.453 "w_mbytes_per_sec": 0 00:18:42.453 }, 00:18:42.453 "claimed": true, 00:18:42.453 "claim_type": "read_many_write_one", 00:18:42.453 "zoned": false, 00:18:42.453 "supported_io_types": { 00:18:42.453 "read": true, 00:18:42.453 "write": true, 00:18:42.453 "unmap": true, 00:18:42.453 "flush": true, 00:18:42.453 "reset": true, 00:18:42.453 "nvme_admin": true, 00:18:42.453 "nvme_io": true, 00:18:42.453 "nvme_io_md": false, 00:18:42.453 "write_zeroes": true, 00:18:42.453 "zcopy": false, 00:18:42.453 "get_zone_info": false, 00:18:42.453 "zone_management": false, 00:18:42.453 "zone_append": false, 00:18:42.453 "compare": true, 00:18:42.453 "compare_and_write": false, 00:18:42.453 "abort": true, 00:18:42.453 "seek_hole": false, 00:18:42.453 "seek_data": false, 00:18:42.453 "copy": true, 00:18:42.453 "nvme_iov_md": false 00:18:42.453 }, 00:18:42.454 "driver_specific": { 00:18:42.454 
"nvme": [ 00:18:42.454 { 00:18:42.454 "pci_address": "0000:00:11.0", 00:18:42.454 "trid": { 00:18:42.454 "trtype": "PCIe", 00:18:42.454 "traddr": "0000:00:11.0" 00:18:42.454 }, 00:18:42.454 "ctrlr_data": { 00:18:42.454 "cntlid": 0, 00:18:42.454 "vendor_id": "0x1b36", 00:18:42.454 "model_number": "QEMU NVMe Ctrl", 00:18:42.454 "serial_number": "12341", 00:18:42.454 "firmware_revision": "8.0.0", 00:18:42.454 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:42.454 "oacs": { 00:18:42.454 "security": 0, 00:18:42.454 "format": 1, 00:18:42.454 "firmware": 0, 00:18:42.454 "ns_manage": 1 00:18:42.454 }, 00:18:42.454 "multi_ctrlr": false, 00:18:42.454 "ana_reporting": false 00:18:42.454 }, 00:18:42.454 "vs": { 00:18:42.454 "nvme_version": "1.4" 00:18:42.454 }, 00:18:42.454 "ns_data": { 00:18:42.454 "id": 1, 00:18:42.454 "can_share": false 00:18:42.454 } 00:18:42.454 } 00:18:42.454 ], 00:18:42.454 "mp_policy": "active_passive" 00:18:42.454 } 00:18:42.454 } 00:18:42.454 ]' 00:18:42.454 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:42.454 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:42.454 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:42.713 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:42.972 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d5531fe5-8c65-494e-95db-f412f7aed588 00:18:42.972 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:42.972 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5531fe5-8c65-494e-95db-f412f7aed588 00:18:43.230 11:30:25 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:43.490 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=010722c6-b48e-4964-82b5-321c8e3b1022 00:18:43.490 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 010722c6-b48e-4964-82b5-321c8e3b1022 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:43.749 11:30:26 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:43.749 { 00:18:43.749 "name": "8678e2d3-d192-4e2e-a1b6-0bc2d9050568", 00:18:43.749 "aliases": [ 00:18:43.749 "lvs/nvme0n1p0" 00:18:43.749 ], 00:18:43.749 "product_name": "Logical Volume", 00:18:43.749 "block_size": 4096, 00:18:43.749 "num_blocks": 26476544, 00:18:43.749 "uuid": "8678e2d3-d192-4e2e-a1b6-0bc2d9050568", 00:18:43.749 "assigned_rate_limits": { 00:18:43.749 "rw_ios_per_sec": 0, 00:18:43.749 "rw_mbytes_per_sec": 0, 00:18:43.749 "r_mbytes_per_sec": 0, 00:18:43.749 "w_mbytes_per_sec": 0 00:18:43.749 }, 00:18:43.749 "claimed": false, 00:18:43.749 "zoned": false, 00:18:43.749 "supported_io_types": { 00:18:43.749 "read": true, 00:18:43.749 "write": true, 00:18:43.749 "unmap": true, 00:18:43.749 "flush": false, 00:18:43.749 "reset": true, 00:18:43.749 "nvme_admin": false, 00:18:43.749 "nvme_io": false, 00:18:43.749 "nvme_io_md": false, 00:18:43.749 "write_zeroes": true, 00:18:43.749 "zcopy": false, 00:18:43.749 "get_zone_info": false, 00:18:43.749 "zone_management": false, 00:18:43.749 "zone_append": false, 00:18:43.749 "compare": false, 00:18:43.749 "compare_and_write": false, 00:18:43.749 "abort": false, 00:18:43.749 "seek_hole": true, 00:18:43.749 "seek_data": true, 00:18:43.749 "copy": false, 00:18:43.749 "nvme_iov_md": false 00:18:43.749 }, 00:18:43.749 "driver_specific": { 00:18:43.749 "lvol": { 00:18:43.749 "lvol_store_uuid": "010722c6-b48e-4964-82b5-321c8e3b1022", 00:18:43.749 "base_bdev": "nvme0n1", 00:18:43.749 "thin_provision": true, 00:18:43.749 "num_allocated_clusters": 0, 00:18:43.749 "snapshot": false, 00:18:43.749 "clone": false, 00:18:43.749 "esnap_clone": false 00:18:43.749 } 00:18:43.749 } 00:18:43.749 } 00:18:43.749 ]' 00:18:43.749 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:44.008 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:44.009 11:30:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:44.267 11:30:27 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:44.267 11:30:27 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:44.267 11:30:27 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:44.268 11:30:27 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:44.268 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:44.268 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:18:44.268 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:44.268 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:44.527 { 00:18:44.527 "name": "8678e2d3-d192-4e2e-a1b6-0bc2d9050568", 00:18:44.527 "aliases": [ 00:18:44.527 "lvs/nvme0n1p0" 00:18:44.527 ], 00:18:44.527 "product_name": "Logical Volume", 00:18:44.527 "block_size": 4096, 00:18:44.527 "num_blocks": 26476544, 00:18:44.527 "uuid": "8678e2d3-d192-4e2e-a1b6-0bc2d9050568", 00:18:44.527 "assigned_rate_limits": { 00:18:44.527 "rw_ios_per_sec": 0, 00:18:44.527 "rw_mbytes_per_sec": 0, 00:18:44.527 "r_mbytes_per_sec": 0, 00:18:44.527 "w_mbytes_per_sec": 0 00:18:44.527 }, 00:18:44.527 "claimed": false, 00:18:44.527 "zoned": false, 00:18:44.527 "supported_io_types": { 00:18:44.527 "read": true, 00:18:44.527 "write": true, 00:18:44.527 "unmap": true, 00:18:44.527 "flush": false, 00:18:44.527 "reset": true, 00:18:44.527 "nvme_admin": false, 00:18:44.527 "nvme_io": false, 00:18:44.527 "nvme_io_md": false, 00:18:44.527 "write_zeroes": true, 00:18:44.527 "zcopy": false, 00:18:44.527 "get_zone_info": false, 00:18:44.527 "zone_management": false, 00:18:44.527 "zone_append": false, 00:18:44.527 "compare": false, 00:18:44.527 "compare_and_write": false, 00:18:44.527 "abort": false, 00:18:44.527 "seek_hole": true, 00:18:44.527 "seek_data": true, 00:18:44.527 "copy": false, 00:18:44.527 "nvme_iov_md": false 00:18:44.527 }, 00:18:44.527 "driver_specific": { 00:18:44.527 "lvol": { 00:18:44.527 "lvol_store_uuid": "010722c6-b48e-4964-82b5-321c8e3b1022", 00:18:44.527 "base_bdev": "nvme0n1", 00:18:44.527 "thin_provision": true, 00:18:44.527 "num_allocated_clusters": 0, 00:18:44.527 "snapshot": false, 00:18:44.527 "clone": false, 00:18:44.527 "esnap_clone": false 00:18:44.527 } 00:18:44.527 } 00:18:44.527 } 00:18:44.527 ]' 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:44.527 11:30:27 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:45.095 { 00:18:45.095 "name": "8678e2d3-d192-4e2e-a1b6-0bc2d9050568", 00:18:45.095 "aliases": [ 00:18:45.095 "lvs/nvme0n1p0" 00:18:45.095 ], 00:18:45.095 "product_name": "Logical Volume", 00:18:45.095 "block_size": 4096, 00:18:45.095 "num_blocks": 26476544, 00:18:45.095 "uuid": "8678e2d3-d192-4e2e-a1b6-0bc2d9050568", 00:18:45.095 "assigned_rate_limits": { 00:18:45.095 "rw_ios_per_sec": 0, 00:18:45.095 "rw_mbytes_per_sec": 0, 00:18:45.095 "r_mbytes_per_sec": 0, 00:18:45.095 "w_mbytes_per_sec": 0 00:18:45.095 }, 00:18:45.095 "claimed": false, 00:18:45.095 "zoned": false, 00:18:45.095 "supported_io_types": { 00:18:45.095 "read": true, 00:18:45.095 "write": true, 00:18:45.095 "unmap": true, 00:18:45.095 "flush": false, 00:18:45.095 "reset": true, 00:18:45.095 "nvme_admin": false, 00:18:45.095 "nvme_io": false, 00:18:45.095 "nvme_io_md": false, 00:18:45.095 "write_zeroes": true, 00:18:45.095 "zcopy": false, 00:18:45.095 "get_zone_info": false, 00:18:45.095 "zone_management": false, 00:18:45.095 "zone_append": false, 00:18:45.095 "compare": false, 00:18:45.095 "compare_and_write": false, 00:18:45.095 "abort": false, 00:18:45.095 "seek_hole": true, 00:18:45.095 "seek_data": true, 00:18:45.095 "copy": false, 00:18:45.095 "nvme_iov_md": false 00:18:45.095 }, 00:18:45.095 "driver_specific": { 00:18:45.095 "lvol": { 00:18:45.095 "lvol_store_uuid": "010722c6-b48e-4964-82b5-321c8e3b1022", 00:18:45.095 "base_bdev": "nvme0n1", 00:18:45.095 "thin_provision": true, 00:18:45.095 "num_allocated_clusters": 0, 00:18:45.095 "snapshot": false, 00:18:45.095 "clone": false, 00:18:45.095 "esnap_clone": false 00:18:45.095 } 00:18:45.095 } 00:18:45.095 } 00:18:45.095 ]' 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:45.095 11:30:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:45.355 11:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:45.355 11:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:45.355 11:30:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:18:45.355 11:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:45.355 11:30:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8678e2d3-d192-4e2e-a1b6-0bc2d9050568 -c nvc0n1p0 --l2p_dram_limit 20 00:18:45.355 [2024-11-15 11:30:28.246834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.246909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:45.355 [2024-11-15 11:30:28.246930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:45.355 [2024-11-15 11:30:28.246944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.247019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.247057] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.355 [2024-11-15 11:30:28.247119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:45.355 [2024-11-15 11:30:28.247134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.247162] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:45.355 [2024-11-15 11:30:28.248275] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:45.355 [2024-11-15 11:30:28.248317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.248336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.355 [2024-11-15 11:30:28.248363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:18:45.355 [2024-11-15 11:30:28.248391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.248540] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 88b2db2f-6c67-4013-aefa-70d8e8b07e5d 00:18:45.355 [2024-11-15 11:30:28.250505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.250542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:45.355 [2024-11-15 11:30:28.250578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:45.355 [2024-11-15 11:30:28.250595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.260798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.260837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.355 [2024-11-15 11:30:28.260873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.140 ms 00:18:45.355 [2024-11-15 11:30:28.260886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.261000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.261018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.355 [2024-11-15 11:30:28.261054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:45.355 [2024-11-15 11:30:28.261104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.261195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.261214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:45.355 [2024-11-15 11:30:28.261230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:45.355 [2024-11-15 11:30:28.261241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.261273] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:45.355 [2024-11-15 11:30:28.266510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.266584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.355 [2024-11-15 11:30:28.266600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.250 ms 00:18:45.355 [2024-11-15 11:30:28.266619] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.266656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.355 [2024-11-15 11:30:28.266673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:45.355 [2024-11-15 11:30:28.266685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:45.355 [2024-11-15 11:30:28.266697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.355 [2024-11-15 11:30:28.266736] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:45.355 [2024-11-15 11:30:28.266903] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:45.355 [2024-11-15 11:30:28.266920] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:45.355 [2024-11-15 11:30:28.266936] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:45.355 [2024-11-15 11:30:28.266949] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:45.355 [2024-11-15 11:30:28.266964] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:45.355 [2024-11-15 11:30:28.266975] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:45.355 [2024-11-15 11:30:28.266987] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:45.355 [2024-11-15 11:30:28.266996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:45.355 [2024-11-15 11:30:28.267008] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:45.355 [2024-11-15 11:30:28.267018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.356 [2024-11-15 11:30:28.267034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:45.356 [2024-11-15 11:30:28.267076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:18:45.356 [2024-11-15 11:30:28.267089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.356 [2024-11-15 11:30:28.267223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.356 [2024-11-15 11:30:28.267247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:45.356 [2024-11-15 11:30:28.267261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:45.356 [2024-11-15 11:30:28.267278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.356 [2024-11-15 11:30:28.267407] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:45.356 [2024-11-15 11:30:28.267444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:45.356 [2024-11-15 11:30:28.267474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.356 [2024-11-15 11:30:28.267504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:45.356 [2024-11-15 11:30:28.267530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:45.356 
[2024-11-15 11:30:28.267554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:45.356 [2024-11-15 11:30:28.267565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.356 [2024-11-15 11:30:28.267587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:45.356 [2024-11-15 11:30:28.267600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:45.356 [2024-11-15 11:30:28.267610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:45.356 [2024-11-15 11:30:28.267635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:45.356 [2024-11-15 11:30:28.267646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:45.356 [2024-11-15 11:30:28.267661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:45.356 [2024-11-15 11:30:28.267683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:45.356 [2024-11-15 11:30:28.267693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:45.356 [2024-11-15 11:30:28.267718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.356 [2024-11-15 11:30:28.267741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:45.356 [2024-11-15 11:30:28.267753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.356 [2024-11-15 11:30:28.267779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:45.356 [2024-11-15 11:30:28.267790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.356 [2024-11-15 11:30:28.267813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:45.356 [2024-11-15 11:30:28.267825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:45.356 [2024-11-15 11:30:28.267851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:45.356 [2024-11-15 11:30:28.267861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.356 [2024-11-15 11:30:28.267884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:45.356 [2024-11-15 11:30:28.267897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:45.356 [2024-11-15 11:30:28.267907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:45.356 [2024-11-15 11:30:28.267920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:45.356 [2024-11-15 11:30:28.267930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:45.356 [2024-11-15 11:30:28.267943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:45.356 [2024-11-15 11:30:28.267966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:45.356 [2024-11-15 11:30:28.267976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.267988] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:45.356 [2024-11-15 11:30:28.268000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:45.356 [2024-11-15 11:30:28.268013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:45.356 [2024-11-15 11:30:28.268024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:45.356 [2024-11-15 11:30:28.268073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:45.356 [2024-11-15 11:30:28.268085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:45.356 [2024-11-15 11:30:28.268098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:45.356 [2024-11-15 11:30:28.268110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:45.356 [2024-11-15 11:30:28.268123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:45.356 [2024-11-15 11:30:28.268134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:45.356 [2024-11-15 11:30:28.268167] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:45.356 [2024-11-15 11:30:28.268186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:45.356 [2024-11-15 11:30:28.268214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:45.356 [2024-11-15 11:30:28.268229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:45.356 [2024-11-15 11:30:28.268241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:45.356 [2024-11-15 11:30:28.268255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:45.356 [2024-11-15 11:30:28.268268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:45.356 [2024-11-15 11:30:28.268282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:45.356 [2024-11-15 11:30:28.268294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:45.356 [2024-11-15 11:30:28.268310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:45.356 [2024-11-15 11:30:28.268322] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:45.356 [2024-11-15 11:30:28.268402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:45.356 [2024-11-15 11:30:28.268415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:45.356 [2024-11-15 11:30:28.268458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:45.356 [2024-11-15 11:30:28.268472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:45.356 [2024-11-15 11:30:28.268483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:45.356 [2024-11-15 11:30:28.268498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.356 [2024-11-15 11:30:28.268512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:45.356 [2024-11-15 11:30:28.268526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms 00:18:45.356 [2024-11-15 11:30:28.268537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.356 [2024-11-15 11:30:28.268589] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
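The superblock dump above expresses each region as hexadecimal FTL-block counts (blk_offs/blk_sz), while the layout dump before it reports the same geometry in MiB; the two agree if one FTL block is 4 KiB. A throwaway shell cross-check (the helper and the 4 KiB block size are assumptions inferred from matching these two dumps, not part of the test suite):
  # blk_to_mib: hypothetical one-off helper; assumes 4 KiB FTL blocks (inferred from this log, not queried)
  blk_to_mib() { echo "scale=2; $(($1)) * 4096 / 1048576" | bc; }
  blk_to_mib 0x5020   # 80.12 -> lines up with the band_md offset reported above
  blk_to_mib 0x80     # 0.50 MiB -> the band_md size
  blk_to_mib 0x5120   # 81.12 -> the p2l0 offset
  blk_to_mib 0x800    # 8.00 -> the size of each p2l checkpoint region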
00:18:45.356 [2024-11-15 11:30:28.268621] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:48.646 [2024-11-15 11:30:31.286605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.646 [2024-11-15 11:30:31.286691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:48.646 [2024-11-15 11:30:31.286735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3018.020 ms 00:18:48.646 [2024-11-15 11:30:31.286747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.646 [2024-11-15 11:30:31.320244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.646 [2024-11-15 11:30:31.320300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.646 [2024-11-15 11:30:31.320349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.259 ms 00:18:48.646 [2024-11-15 11:30:31.320361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.646 [2024-11-15 11:30:31.320522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.646 [2024-11-15 11:30:31.320540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:48.646 [2024-11-15 11:30:31.320557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:48.646 [2024-11-15 11:30:31.320567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.646 [2024-11-15 11:30:31.373083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.646 [2024-11-15 11:30:31.373137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.646 [2024-11-15 11:30:31.373175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.445 ms 00:18:48.646 [2024-11-15 11:30:31.373187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.646 [2024-11-15 11:30:31.373232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.646 [2024-11-15 11:30:31.373251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.646 [2024-11-15 11:30:31.373265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:48.646 [2024-11-15 11:30:31.373276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.646 [2024-11-15 11:30:31.373903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.646 [2024-11-15 11:30:31.373927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.646 [2024-11-15 11:30:31.373943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:18:48.646 [2024-11-15 11:30:31.373954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.374107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.374124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.647 [2024-11-15 11:30:31.374155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:18:48.647 [2024-11-15 11:30:31.374165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.390737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.390934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.647 [2024-11-15 
11:30:31.390965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.533 ms 00:18:48.647 [2024-11-15 11:30:31.390978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.402972] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:48.647 [2024-11-15 11:30:31.410065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.410104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:48.647 [2024-11-15 11:30:31.410120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.959 ms 00:18:48.647 [2024-11-15 11:30:31.410132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.484976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.485116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:48.647 [2024-11-15 11:30:31.485140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.811 ms 00:18:48.647 [2024-11-15 11:30:31.485171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.485412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.485437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:48.647 [2024-11-15 11:30:31.485450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:18:48.647 [2024-11-15 11:30:31.485463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.510416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.510462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:48.647 [2024-11-15 11:30:31.510480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.864 ms 00:18:48.647 [2024-11-15 11:30:31.510493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.534552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.534597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:48.647 [2024-11-15 11:30:31.534614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.020 ms 00:18:48.647 [2024-11-15 11:30:31.534626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.647 [2024-11-15 11:30:31.535421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.647 [2024-11-15 11:30:31.535465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:48.647 [2024-11-15 11:30:31.535509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:18:48.647 [2024-11-15 11:30:31.535522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 11:30:31.609125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.906 [2024-11-15 11:30:31.609196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:48.906 [2024-11-15 11:30:31.609222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.561 ms 00:18:48.906 [2024-11-15 11:30:31.609236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 
11:30:31.635212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.906 [2024-11-15 11:30:31.635258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:48.906 [2024-11-15 11:30:31.635278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.893 ms 00:18:48.906 [2024-11-15 11:30:31.635290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 11:30:31.659682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.906 [2024-11-15 11:30:31.659733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:48.906 [2024-11-15 11:30:31.659748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.352 ms 00:18:48.906 [2024-11-15 11:30:31.659760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 11:30:31.684538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.906 [2024-11-15 11:30:31.684585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:48.906 [2024-11-15 11:30:31.684601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.741 ms 00:18:48.906 [2024-11-15 11:30:31.684613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 11:30:31.684666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.906 [2024-11-15 11:30:31.684688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:48.906 [2024-11-15 11:30:31.684699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:48.906 [2024-11-15 11:30:31.684712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 11:30:31.684801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.906 [2024-11-15 11:30:31.684822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:48.906 [2024-11-15 11:30:31.684833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:48.906 [2024-11-15 11:30:31.684844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.906 [2024-11-15 11:30:31.686399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3438.950 ms, result 0 00:18:48.906 { 00:18:48.906 "name": "ftl0", 00:18:48.906 "uuid": "88b2db2f-6c67-4013-aefa-70d8e8b07e5d" 00:18:48.906 } 00:18:48.906 11:30:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:48.906 11:30:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:48.906 11:30:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:49.165 11:30:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:49.165 [2024-11-15 11:30:32.110312] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:49.424 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:49.424 Zero copy mechanism will not be used. 00:18:49.424 Running I/O for 4 seconds... 
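At this point bdevperf.sh has verified the bdev (bdev_ftl_get_stats piped through jq and grep) and launched the first workload over bdevperf's RPC socket. A minimal sketch of reproducing this step by hand, using only commands visible in this log and assuming bdevperf is already running with ftl0 configured, as the harness arranges:
  SPDK=/home/vagrant/spdk_repo/spdk
  # sanity check: the stats RPC should name the bdev ftl0
  $SPDK/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # 69632-byte (64 KiB + 4 KiB, i.e. 17 blocks of 4 KiB) random writes at queue depth 1 for 4 s;
  # that size exceeds the 65536-byte zero-copy threshold, hence the notice above
  $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632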
00:18:51.308 1637.00 IOPS, 108.71 MiB/s
[2024-11-15T11:30:35.202Z] 1646.00 IOPS, 109.30 MiB/s
[2024-11-15T11:30:36.139Z] 1661.67 IOPS, 110.35 MiB/s
[2024-11-15T11:30:36.139Z] 1660.00 IOPS, 110.23 MiB/s
00:18:53.190 Latency(us)
00:18:53.190 [2024-11-15T11:30:36.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:53.190 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:53.190 ftl0 : 4.00 1659.42 110.20 0.00 0.00 634.48 269.96 4796.04
00:18:53.190 [2024-11-15T11:30:36.139Z] ===================================================================================================================
00:18:53.190 [2024-11-15T11:30:36.139Z] Total : 1659.42 110.20 0.00 0.00 634.48 269.96 4796.04
00:18:53.190 [2024-11-15 11:30:36.121828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:53.190 {
00:18:53.190 "results": [
00:18:53.191 {
00:18:53.191 "job": "ftl0",
00:18:53.191 "core_mask": "0x1",
00:18:53.191 "workload": "randwrite",
00:18:53.191 "status": "finished",
00:18:53.191 "queue_depth": 1,
00:18:53.191 "io_size": 69632,
00:18:53.191 "runtime": 4.001994,
00:18:53.191 "iops": 1659.422777745294,
00:18:53.191 "mibps": 110.19604383464844,
00:18:53.191 "io_failed": 0,
00:18:53.191 "io_timeout": 0,
00:18:53.191 "avg_latency_us": 634.4769938809873,
00:18:53.191 "min_latency_us": 269.96363636363634,
00:18:53.191 "max_latency_us": 4796.043636363636
00:18:53.191 }
00:18:53.191 ],
00:18:53.191 "core_count": 1
00:18:53.191 }
00:18:53.450 11:30:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-15 11:30:36.246453] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:53.450 Running I/O for 4 seconds...
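The MiB/s column in the 69632-byte randwrite table above is just IOPS times the I/O size; a one-line check against the reported totals reproduces the figure to within rounding:
  echo "scale=2; 1659.42 * 69632 / 1048576" | bc   # 110.19 -> the table reports 110.20 MiB/s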
00:18:55.322 7912.00 IOPS, 30.91 MiB/s
[2024-11-15T11:30:39.653Z] 7748.50 IOPS, 30.27 MiB/s
[2024-11-15T11:30:40.588Z] 7703.00 IOPS, 30.09 MiB/s
[2024-11-15T11:30:40.588Z] 7670.75 IOPS, 29.96 MiB/s
00:18:57.639 Latency(us)
00:18:57.639 [2024-11-15T11:30:40.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:57.639 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:57.639 ftl0 : 4.02 7664.20 29.94 0.00 0.00 16657.59 323.96 25856.93
00:18:57.639 [2024-11-15T11:30:40.588Z] ===================================================================================================================
00:18:57.639 [2024-11-15T11:30:40.588Z] Total : 7664.20 29.94 0.00 0.00 16657.59 0.00 25856.93
00:18:57.639 [2024-11-15 11:30:40.275064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:57.639 {
00:18:57.639 "results": [
00:18:57.639 {
00:18:57.639 "job": "ftl0",
00:18:57.639 "core_mask": "0x1",
00:18:57.639 "workload": "randwrite",
00:18:57.639 "status": "finished",
00:18:57.639 "queue_depth": 128,
00:18:57.639 "io_size": 4096,
00:18:57.639 "runtime": 4.01986,
00:18:57.639 "iops": 7664.197260601116,
00:18:57.639 "mibps": 29.93827054922311,
00:18:57.639 "io_failed": 0,
00:18:57.639 "io_timeout": 0,
00:18:57.639 "avg_latency_us": 16657.58692707857,
00:18:57.639 "min_latency_us": 323.9563636363636,
00:18:57.639 "max_latency_us": 25856.93090909091
00:18:57.639 }
00:18:57.639 ],
00:18:57.639 "core_count": 1
00:18:57.639 }
00:18:57.639 11:30:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-15 11:30:40.423719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:57.639 Running I/O for 4 seconds...
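For the queue-depth-128 run above, Little's law (in-flight I/O = IOPS x mean latency) gives a quick sanity check that the queue stayed saturated for the whole run:
  echo "scale=1; 7664.20 * 16657.59 / 1000000" | bc   # 127.6 I/Os in flight, close to the requested -q 128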
00:18:59.510 4716.00 IOPS, 18.42 MiB/s
[2024-11-15T11:30:43.836Z] 4819.00 IOPS, 18.82 MiB/s
[2024-11-15T11:30:44.771Z] 4835.33 IOPS, 18.89 MiB/s
[2024-11-15T11:30:44.771Z] 4847.25 IOPS, 18.93 MiB/s
00:19:01.822 Latency(us)
00:19:01.822 [2024-11-15T11:30:44.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:01.822 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:01.822 Verification LBA range: start 0x0 length 0x1400000
00:19:01.822 ftl0 : 4.02 4858.62 18.98 0.00 0.00 26240.19 366.78 28716.68
00:19:01.822 [2024-11-15T11:30:44.771Z] ===================================================================================================================
00:19:01.822 [2024-11-15T11:30:44.771Z] Total : 4858.62 18.98 0.00 0.00 26240.19 0.00 28716.68
00:19:01.822 [2024-11-15 11:30:44.456500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:01.822 {
00:19:01.822 "results": [
00:19:01.822 {
00:19:01.822 "job": "ftl0",
00:19:01.822 "core_mask": "0x1",
00:19:01.822 "workload": "verify",
00:19:01.822 "status": "finished",
00:19:01.823 "verify_range": {
00:19:01.823 "start": 0,
00:19:01.823 "length": 20971520
00:19:01.823 },
00:19:01.823 "queue_depth": 128,
00:19:01.823 "io_size": 4096,
00:19:01.823 "runtime": 4.016782,
00:19:01.823 "iops": 4858.615677923273,
00:19:01.823 "mibps": 18.978967491887786,
00:19:01.823 "io_failed": 0,
00:19:01.823 "io_timeout": 0,
00:19:01.823 "avg_latency_us": 26240.18941586391,
00:19:01.823 "min_latency_us": 366.7781818181818,
00:19:01.823 "max_latency_us": 28716.683636363636
00:19:01.823 }
00:19:01.823 ],
00:19:01.823 "core_count": 1
00:19:01.823 }
00:19:01.823 11:30:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-15 11:30:44.708490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.823 [2024-11-15 11:30:44.708702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:01.823 [2024-11-15 11:30:44.708731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:01.823 [2024-11-15 11:30:44.708747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.823 [2024-11-15 11:30:44.708783] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:01.823 [2024-11-15 11:30:44.712015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.823 [2024-11-15 11:30:44.712215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:01.823 [2024-11-15 11:30:44.712246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms
00:19:01.823 [2024-11-15 11:30:44.712258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:01.823 [2024-11-15 11:30:44.714085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:01.823 [2024-11-15 11:30:44.714154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:01.823 [2024-11-15 11:30:44.714172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms
00:19:01.823 [2024-11-15 11:30:44.714185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:02.080 [2024-11-15 11:30:44.890052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:02.080 [2024-11-15 11:30:44.890150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:19:02.080 [2024-11-15 11:30:44.890177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 175.825 ms 00:19:02.080 [2024-11-15 11:30:44.890189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.080 [2024-11-15 11:30:44.895505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.080 [2024-11-15 11:30:44.895661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:02.080 [2024-11-15 11:30:44.895707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.272 ms 00:19:02.080 [2024-11-15 11:30:44.895720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.080 [2024-11-15 11:30:44.920974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.080 [2024-11-15 11:30:44.921014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:02.080 [2024-11-15 11:30:44.921070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.185 ms 00:19:02.081 [2024-11-15 11:30:44.921101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.081 [2024-11-15 11:30:44.936905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.081 [2024-11-15 11:30:44.937159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:02.081 [2024-11-15 11:30:44.937195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.758 ms 00:19:02.081 [2024-11-15 11:30:44.937208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.081 [2024-11-15 11:30:44.937391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.081 [2024-11-15 11:30:44.937426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:02.081 [2024-11-15 11:30:44.937459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:02.081 [2024-11-15 11:30:44.937470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.081 [2024-11-15 11:30:44.962154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.081 [2024-11-15 11:30:44.962193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:02.081 [2024-11-15 11:30:44.962211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.646 ms 00:19:02.081 [2024-11-15 11:30:44.962221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.081 [2024-11-15 11:30:44.986384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.081 [2024-11-15 11:30:44.986422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:02.081 [2024-11-15 11:30:44.986440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.120 ms 00:19:02.081 [2024-11-15 11:30:44.986450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.081 [2024-11-15 11:30:45.010257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.081 [2024-11-15 11:30:45.010295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:02.081 [2024-11-15 11:30:45.010313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.765 ms 00:19:02.081 [2024-11-15 11:30:45.010323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.340 [2024-11-15 11:30:45.034233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.340 [2024-11-15 11:30:45.034271] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:02.340 [2024-11-15 11:30:45.034307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.831 ms 00:19:02.340 [2024-11-15 11:30:45.034317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.340 [2024-11-15 11:30:45.034358] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:02.340 [2024-11-15 11:30:45.034380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:02.340 [2024-11-15 11:30:45.034395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:02.340 [2024-11-15 11:30:45.034405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:02.341 [2024-11-15 11:30:45.034638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.034998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:02.341 [2024-11-15 11:30:45.035550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035621] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:02.342 [2024-11-15 11:30:45.035679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:02.342 [2024-11-15 11:30:45.035705] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 88b2db2f-6c67-4013-aefa-70d8e8b07e5d 00:19:02.342 [2024-11-15 11:30:45.035717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:02.342 [2024-11-15 11:30:45.035733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:02.342 [2024-11-15 11:30:45.035742] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:02.342 [2024-11-15 11:30:45.035755] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:02.342 [2024-11-15 11:30:45.035765] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:02.342 [2024-11-15 11:30:45.035778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:02.342 [2024-11-15 11:30:45.035789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:02.342 [2024-11-15 11:30:45.035802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:02.342 [2024-11-15 11:30:45.035812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:02.342 [2024-11-15 11:30:45.035825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.342 [2024-11-15 11:30:45.035835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:02.342 [2024-11-15 11:30:45.035849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:19:02.342 [2024-11-15 11:30:45.035874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.051322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.342 [2024-11-15 11:30:45.051538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:02.342 [2024-11-15 11:30:45.051570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.402 ms 00:19:02.342 [2024-11-15 11:30:45.051592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.052134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.342 [2024-11-15 11:30:45.052160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:02.342 [2024-11-15 11:30:45.052177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:19:02.342 [2024-11-15 11:30:45.052188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.094697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.094739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.342 [2024-11-15 11:30:45.094776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.094787] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.094850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.094864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.342 [2024-11-15 11:30:45.094877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.094887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.095031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.095102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.342 [2024-11-15 11:30:45.095120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.095131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.095158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.095171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.342 [2024-11-15 11:30:45.095200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.095211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.183425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.183481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.342 [2024-11-15 11:30:45.183521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.183532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.259867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.342 [2024-11-15 11:30:45.260125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.260139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.260243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.342 [2024-11-15 11:30:45.260279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.260290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.260388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.342 [2024-11-15 11:30:45.260421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.260446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.260583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.342 [2024-11-15 11:30:45.260652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:02.342 [2024-11-15 11:30:45.260662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.260709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.342 [2024-11-15 11:30:45.260738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.260748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.260791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.342 [2024-11-15 11:30:45.260820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.260830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.260879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.342 [2024-11-15 11:30:45.260904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.342 [2024-11-15 11:30:45.260918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.342 [2024-11-15 11:30:45.260928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.342 [2024-11-15 11:30:45.261137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.532 ms, result 0 00:19:02.342 true 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75110 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 75110 ']' 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 75110 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75110 00:19:02.601 killing process with pid 75110 00:19:02.601 Received shutdown signal, test time was about 4.000000 seconds 00:19:02.601 00:19:02.601 Latency(us) 00:19:02.601 [2024-11-15T11:30:45.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.601 [2024-11-15T11:30:45.550Z] =================================================================================================================== 00:19:02.601 [2024-11-15T11:30:45.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75110' 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 75110 00:19:02.601 11:30:45 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 75110 00:19:05.888 Remove shared memory files 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:05.888 11:30:48 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:05.888 ************************************ 00:19:05.888 END TEST ftl_bdevperf 00:19:05.888 ************************************ 00:19:05.888 00:19:05.888 real 0m25.342s 00:19:05.888 user 0m28.648s 00:19:05.888 sys 0m1.187s 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:05.888 11:30:48 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:06.147 11:30:48 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:06.147 11:30:48 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:06.147 11:30:48 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:06.147 11:30:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:06.147 ************************************ 00:19:06.147 START TEST ftl_trim 00:19:06.147 ************************************ 00:19:06.147 11:30:48 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:06.147 * Looking for test storage... 00:19:06.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.147 11:30:48 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:06.147 11:30:48 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:19:06.147 11:30:48 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:06.147 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.148 11:30:49 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:06.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.148 --rc genhtml_branch_coverage=1 00:19:06.148 --rc genhtml_function_coverage=1 00:19:06.148 --rc genhtml_legend=1 00:19:06.148 --rc geninfo_all_blocks=1 00:19:06.148 --rc geninfo_unexecuted_blocks=1 00:19:06.148 00:19:06.148 ' 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:06.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.148 --rc genhtml_branch_coverage=1 00:19:06.148 --rc genhtml_function_coverage=1 00:19:06.148 --rc genhtml_legend=1 00:19:06.148 --rc geninfo_all_blocks=1 00:19:06.148 --rc geninfo_unexecuted_blocks=1 00:19:06.148 00:19:06.148 ' 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:06.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.148 --rc genhtml_branch_coverage=1 00:19:06.148 --rc genhtml_function_coverage=1 00:19:06.148 --rc genhtml_legend=1 00:19:06.148 --rc geninfo_all_blocks=1 00:19:06.148 --rc geninfo_unexecuted_blocks=1 00:19:06.148 00:19:06.148 ' 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:06.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.148 --rc genhtml_branch_coverage=1 00:19:06.148 --rc genhtml_function_coverage=1 00:19:06.148 --rc genhtml_legend=1 00:19:06.148 --rc geninfo_all_blocks=1 00:19:06.148 --rc geninfo_unexecuted_blocks=1 00:19:06.148 00:19:06.148 ' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
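Here trim.sh has finished probing lcov and is sourcing test/ftl/common.sh (the dirname/readlink calls above), which sets up the directory variables and the rpc.py helper used by the rest of the suite. The suite itself was launched by ftl.sh's run_test wrapper with the base and cache NVMe addresses as positional arguments; a hedged sketch of invoking it directly outside the autotest harness (it assumes root privileges and devices already bound for SPDK, both of which the harness normally arranges):
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/ftl/trim.sh 0000:00:11.0 0000:00:10.0   # base bdev PCI address, then cache bdev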
00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.148 11:30:49 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75471 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:06.148 11:30:49 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75471 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75471 ']' 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.148 11:30:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:06.407 [2024-11-15 11:30:49.209549] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:19:06.407 [2024-11-15 11:30:49.209900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75471 ] 00:19:06.666 [2024-11-15 11:30:49.394337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.666 [2024-11-15 11:30:49.514121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.666 [2024-11-15 11:30:49.514232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.666 [2024-11-15 11:30:49.514265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.601 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.601 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:19:07.601 11:30:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:07.601 11:30:50 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:07.601 11:30:50 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:07.601 11:30:50 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:07.601 11:30:50 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:07.601 11:30:50 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:07.860 11:30:50 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.860 11:30:50 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:07.860 11:30:50 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.860 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:07.860 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:07.860 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:07.860 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:07.860 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:08.119 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:08.119 { 00:19:08.119 "name": "nvme0n1", 00:19:08.119 "aliases": [ 
00:19:08.119 "725b5f3b-0f38-4164-a6b4-5c7f82cd6ca6" 00:19:08.120 ], 00:19:08.120 "product_name": "NVMe disk", 00:19:08.120 "block_size": 4096, 00:19:08.120 "num_blocks": 1310720, 00:19:08.120 "uuid": "725b5f3b-0f38-4164-a6b4-5c7f82cd6ca6", 00:19:08.120 "numa_id": -1, 00:19:08.120 "assigned_rate_limits": { 00:19:08.120 "rw_ios_per_sec": 0, 00:19:08.120 "rw_mbytes_per_sec": 0, 00:19:08.120 "r_mbytes_per_sec": 0, 00:19:08.120 "w_mbytes_per_sec": 0 00:19:08.120 }, 00:19:08.120 "claimed": true, 00:19:08.120 "claim_type": "read_many_write_one", 00:19:08.120 "zoned": false, 00:19:08.120 "supported_io_types": { 00:19:08.120 "read": true, 00:19:08.120 "write": true, 00:19:08.120 "unmap": true, 00:19:08.120 "flush": true, 00:19:08.120 "reset": true, 00:19:08.120 "nvme_admin": true, 00:19:08.120 "nvme_io": true, 00:19:08.120 "nvme_io_md": false, 00:19:08.120 "write_zeroes": true, 00:19:08.120 "zcopy": false, 00:19:08.120 "get_zone_info": false, 00:19:08.120 "zone_management": false, 00:19:08.120 "zone_append": false, 00:19:08.120 "compare": true, 00:19:08.120 "compare_and_write": false, 00:19:08.120 "abort": true, 00:19:08.120 "seek_hole": false, 00:19:08.120 "seek_data": false, 00:19:08.120 "copy": true, 00:19:08.120 "nvme_iov_md": false 00:19:08.120 }, 00:19:08.120 "driver_specific": { 00:19:08.120 "nvme": [ 00:19:08.120 { 00:19:08.120 "pci_address": "0000:00:11.0", 00:19:08.120 "trid": { 00:19:08.120 "trtype": "PCIe", 00:19:08.120 "traddr": "0000:00:11.0" 00:19:08.120 }, 00:19:08.120 "ctrlr_data": { 00:19:08.120 "cntlid": 0, 00:19:08.120 "vendor_id": "0x1b36", 00:19:08.120 "model_number": "QEMU NVMe Ctrl", 00:19:08.120 "serial_number": "12341", 00:19:08.120 "firmware_revision": "8.0.0", 00:19:08.120 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:08.120 "oacs": { 00:19:08.120 "security": 0, 00:19:08.120 "format": 1, 00:19:08.120 "firmware": 0, 00:19:08.120 "ns_manage": 1 00:19:08.120 }, 00:19:08.120 "multi_ctrlr": false, 00:19:08.120 "ana_reporting": false 00:19:08.120 }, 00:19:08.120 "vs": { 00:19:08.120 "nvme_version": "1.4" 00:19:08.120 }, 00:19:08.120 "ns_data": { 00:19:08.120 "id": 1, 00:19:08.120 "can_share": false 00:19:08.120 } 00:19:08.120 } 00:19:08.120 ], 00:19:08.120 "mp_policy": "active_passive" 00:19:08.120 } 00:19:08.120 } 00:19:08.120 ]' 00:19:08.120 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:08.120 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:08.120 11:30:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:08.120 11:30:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:08.120 11:30:51 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:08.120 11:30:51 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:19:08.120 11:30:51 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:08.120 11:30:51 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:08.120 11:30:51 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:08.120 11:30:51 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:08.120 11:30:51 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:08.379 11:30:51 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=010722c6-b48e-4964-82b5-321c8e3b1022 00:19:08.379 11:30:51 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:08.379 11:30:51 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 010722c6-b48e-4964-82b5-321c8e3b1022 00:19:08.637 11:30:51 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:09.204 11:30:51 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=52e40787-0334-4e6a-84ab-dcde8abd22d1 00:19:09.204 11:30:51 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 52e40787-0334-4e6a-84ab-dcde8abd22d1 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:09.204 11:30:52 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.204 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.204 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:09.204 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:09.204 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:09.462 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.462 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:09.462 { 00:19:09.462 "name": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:09.462 "aliases": [ 00:19:09.462 "lvs/nvme0n1p0" 00:19:09.462 ], 00:19:09.462 "product_name": "Logical Volume", 00:19:09.462 "block_size": 4096, 00:19:09.462 "num_blocks": 26476544, 00:19:09.462 "uuid": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:09.462 "assigned_rate_limits": { 00:19:09.462 "rw_ios_per_sec": 0, 00:19:09.462 "rw_mbytes_per_sec": 0, 00:19:09.462 "r_mbytes_per_sec": 0, 00:19:09.462 "w_mbytes_per_sec": 0 00:19:09.462 }, 00:19:09.462 "claimed": false, 00:19:09.462 "zoned": false, 00:19:09.462 "supported_io_types": { 00:19:09.462 "read": true, 00:19:09.462 "write": true, 00:19:09.462 "unmap": true, 00:19:09.462 "flush": false, 00:19:09.462 "reset": true, 00:19:09.462 "nvme_admin": false, 00:19:09.462 "nvme_io": false, 00:19:09.462 "nvme_io_md": false, 00:19:09.462 "write_zeroes": true, 00:19:09.462 "zcopy": false, 00:19:09.462 "get_zone_info": false, 00:19:09.462 "zone_management": false, 00:19:09.462 "zone_append": false, 00:19:09.462 "compare": false, 00:19:09.462 "compare_and_write": false, 00:19:09.462 "abort": false, 00:19:09.462 "seek_hole": true, 00:19:09.462 "seek_data": true, 00:19:09.462 "copy": false, 00:19:09.462 "nvme_iov_md": false 00:19:09.462 }, 00:19:09.462 "driver_specific": { 00:19:09.462 "lvol": { 00:19:09.462 "lvol_store_uuid": "52e40787-0334-4e6a-84ab-dcde8abd22d1", 00:19:09.462 "base_bdev": "nvme0n1", 00:19:09.462 "thin_provision": true, 00:19:09.462 "num_allocated_clusters": 0, 00:19:09.462 "snapshot": false, 00:19:09.462 "clone": false, 00:19:09.462 "esnap_clone": false 00:19:09.462 } 00:19:09.462 } 00:19:09.462 } 00:19:09.462 ]' 00:19:09.462 11:30:52 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:09.720 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:09.720 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:09.720 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:09.720 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:09.720 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:09.720 11:30:52 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:09.720 11:30:52 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:09.720 11:30:52 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:09.979 11:30:52 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:09.979 11:30:52 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:09.979 11:30:52 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.979 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:09.979 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:09.979 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:09.979 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:09.979 11:30:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:10.237 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:10.237 { 00:19:10.237 "name": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:10.237 "aliases": [ 00:19:10.237 "lvs/nvme0n1p0" 00:19:10.237 ], 00:19:10.237 "product_name": "Logical Volume", 00:19:10.237 "block_size": 4096, 00:19:10.237 "num_blocks": 26476544, 00:19:10.237 "uuid": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:10.237 "assigned_rate_limits": { 00:19:10.237 "rw_ios_per_sec": 0, 00:19:10.237 "rw_mbytes_per_sec": 0, 00:19:10.237 "r_mbytes_per_sec": 0, 00:19:10.237 "w_mbytes_per_sec": 0 00:19:10.237 }, 00:19:10.237 "claimed": false, 00:19:10.237 "zoned": false, 00:19:10.237 "supported_io_types": { 00:19:10.237 "read": true, 00:19:10.237 "write": true, 00:19:10.237 "unmap": true, 00:19:10.237 "flush": false, 00:19:10.237 "reset": true, 00:19:10.237 "nvme_admin": false, 00:19:10.237 "nvme_io": false, 00:19:10.237 "nvme_io_md": false, 00:19:10.237 "write_zeroes": true, 00:19:10.237 "zcopy": false, 00:19:10.237 "get_zone_info": false, 00:19:10.237 "zone_management": false, 00:19:10.237 "zone_append": false, 00:19:10.237 "compare": false, 00:19:10.237 "compare_and_write": false, 00:19:10.237 "abort": false, 00:19:10.237 "seek_hole": true, 00:19:10.237 "seek_data": true, 00:19:10.237 "copy": false, 00:19:10.237 "nvme_iov_md": false 00:19:10.237 }, 00:19:10.237 "driver_specific": { 00:19:10.237 "lvol": { 00:19:10.237 "lvol_store_uuid": "52e40787-0334-4e6a-84ab-dcde8abd22d1", 00:19:10.237 "base_bdev": "nvme0n1", 00:19:10.237 "thin_provision": true, 00:19:10.237 "num_allocated_clusters": 0, 00:19:10.237 "snapshot": false, 00:19:10.237 "clone": false, 00:19:10.237 "esnap_clone": false 00:19:10.237 } 00:19:10.237 } 00:19:10.237 } 00:19:10.237 ]' 00:19:10.237 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:10.237 11:30:53 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:19:10.237 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:10.495 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:10.495 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:10.495 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:10.495 11:30:53 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:10.495 11:30:53 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:10.495 11:30:53 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:10.495 11:30:53 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:10.754 11:30:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84340e5c-ede7-44d6-b236-0a1f4151719c 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:10.754 { 00:19:10.754 "name": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:10.754 "aliases": [ 00:19:10.754 "lvs/nvme0n1p0" 00:19:10.754 ], 00:19:10.754 "product_name": "Logical Volume", 00:19:10.754 "block_size": 4096, 00:19:10.754 "num_blocks": 26476544, 00:19:10.754 "uuid": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:10.754 "assigned_rate_limits": { 00:19:10.754 "rw_ios_per_sec": 0, 00:19:10.754 "rw_mbytes_per_sec": 0, 00:19:10.754 "r_mbytes_per_sec": 0, 00:19:10.754 "w_mbytes_per_sec": 0 00:19:10.754 }, 00:19:10.754 "claimed": false, 00:19:10.754 "zoned": false, 00:19:10.754 "supported_io_types": { 00:19:10.754 "read": true, 00:19:10.754 "write": true, 00:19:10.754 "unmap": true, 00:19:10.754 "flush": false, 00:19:10.754 "reset": true, 00:19:10.754 "nvme_admin": false, 00:19:10.754 "nvme_io": false, 00:19:10.754 "nvme_io_md": false, 00:19:10.754 "write_zeroes": true, 00:19:10.754 "zcopy": false, 00:19:10.754 "get_zone_info": false, 00:19:10.754 "zone_management": false, 00:19:10.754 "zone_append": false, 00:19:10.754 "compare": false, 00:19:10.754 "compare_and_write": false, 00:19:10.754 "abort": false, 00:19:10.754 "seek_hole": true, 00:19:10.754 "seek_data": true, 00:19:10.754 "copy": false, 00:19:10.754 "nvme_iov_md": false 00:19:10.754 }, 00:19:10.754 "driver_specific": { 00:19:10.754 "lvol": { 00:19:10.754 "lvol_store_uuid": "52e40787-0334-4e6a-84ab-dcde8abd22d1", 00:19:10.754 "base_bdev": "nvme0n1", 00:19:10.754 "thin_provision": true, 00:19:10.754 "num_allocated_clusters": 0, 00:19:10.754 "snapshot": false, 00:19:10.754 "clone": false, 00:19:10.754 "esnap_clone": false 00:19:10.754 } 00:19:10.754 } 00:19:10.754 } 00:19:10.754 ]' 00:19:10.754 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:11.012 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:11.012 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:11.012 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:19:11.012 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:11.012 11:30:53 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:11.012 11:30:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:11.012 11:30:53 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 84340e5c-ede7-44d6-b236-0a1f4151719c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:11.272 [2024-11-15 11:30:53.994654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:53.994728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:11.272 [2024-11-15 11:30:53.994770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:11.272 [2024-11-15 11:30:53.994783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:53.998506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:53.998546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.272 [2024-11-15 11:30:53.998582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:19:11.272 [2024-11-15 11:30:53.998593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:53.998760] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:11.272 [2024-11-15 11:30:53.999786] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:11.272 [2024-11-15 11:30:53.999847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:53.999862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.272 [2024-11-15 11:30:53.999877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:19:11.272 [2024-11-15 11:30:53.999888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.000181] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8de081b6-03aa-47ad-91df-ca5aa125d9cd 00:19:11.272 [2024-11-15 11:30:54.002182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.002225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:11.272 [2024-11-15 11:30:54.002241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:11.272 [2024-11-15 11:30:54.002255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.011835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.011886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.272 [2024-11-15 11:30:54.011925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.490 ms 00:19:11.272 [2024-11-15 11:30:54.011938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.012148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.012190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.272 [2024-11-15 11:30:54.012205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.122 ms 00:19:11.272 [2024-11-15 11:30:54.012223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.012271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.012289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:11.272 [2024-11-15 11:30:54.012301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:11.272 [2024-11-15 11:30:54.012318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.012360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:11.272 [2024-11-15 11:30:54.017226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.017443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.272 [2024-11-15 11:30:54.017479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.872 ms 00:19:11.272 [2024-11-15 11:30:54.017493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.017576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.017594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:11.272 [2024-11-15 11:30:54.017617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:11.272 [2024-11-15 11:30:54.017650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.017693] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:11.272 [2024-11-15 11:30:54.017859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:11.272 [2024-11-15 11:30:54.017884] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:11.272 [2024-11-15 11:30:54.017900] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:11.272 [2024-11-15 11:30:54.017917] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:11.272 [2024-11-15 11:30:54.017930] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:11.272 [2024-11-15 11:30:54.017945] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:11.272 [2024-11-15 11:30:54.017957] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:11.272 [2024-11-15 11:30:54.017985] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:11.272 [2024-11-15 11:30:54.018010] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:11.272 [2024-11-15 11:30:54.018024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 [2024-11-15 11:30:54.018045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:11.272 [2024-11-15 11:30:54.018059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:19:11.272 [2024-11-15 11:30:54.018071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.018189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.272 
[2024-11-15 11:30:54.018205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:11.272 [2024-11-15 11:30:54.018218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:11.272 [2024-11-15 11:30:54.018229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.272 [2024-11-15 11:30:54.018399] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:11.272 [2024-11-15 11:30:54.018414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:11.272 [2024-11-15 11:30:54.018429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:11.272 [2024-11-15 11:30:54.018440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.272 [2024-11-15 11:30:54.018454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:11.272 [2024-11-15 11:30:54.018464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:11.272 [2024-11-15 11:30:54.018477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:11.272 [2024-11-15 11:30:54.018489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:11.272 [2024-11-15 11:30:54.018501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:11.272 [2024-11-15 11:30:54.018511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:11.272 [2024-11-15 11:30:54.018524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:11.272 [2024-11-15 11:30:54.018535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:11.272 [2024-11-15 11:30:54.018547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:11.272 [2024-11-15 11:30:54.018558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:11.272 [2024-11-15 11:30:54.018571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:11.272 [2024-11-15 11:30:54.018582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.272 [2024-11-15 11:30:54.018597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:11.272 [2024-11-15 11:30:54.018607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:11.272 [2024-11-15 11:30:54.018619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.272 [2024-11-15 11:30:54.018630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:11.272 [2024-11-15 11:30:54.018645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:11.272 [2024-11-15 11:30:54.018657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.273 [2024-11-15 11:30:54.018671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:11.273 [2024-11-15 11:30:54.018682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:11.273 [2024-11-15 11:30:54.018694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.273 [2024-11-15 11:30:54.018705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:11.273 [2024-11-15 11:30:54.018718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:11.273 [2024-11-15 11:30:54.018750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.273 [2024-11-15 11:30:54.018763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:11.273 [2024-11-15 11:30:54.018773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:11.273 [2024-11-15 11:30:54.018785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:11.273 [2024-11-15 11:30:54.018812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:11.273 [2024-11-15 11:30:54.018827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:11.273 [2024-11-15 11:30:54.018838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:11.273 [2024-11-15 11:30:54.018850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:11.273 [2024-11-15 11:30:54.018861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:11.273 [2024-11-15 11:30:54.018873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:11.273 [2024-11-15 11:30:54.018884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:11.273 [2024-11-15 11:30:54.018898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:11.273 [2024-11-15 11:30:54.018908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.273 [2024-11-15 11:30:54.018921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:11.273 [2024-11-15 11:30:54.018932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:11.273 [2024-11-15 11:30:54.018945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.273 [2024-11-15 11:30:54.018955] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:11.273 [2024-11-15 11:30:54.018969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:11.273 [2024-11-15 11:30:54.018981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:11.273 [2024-11-15 11:30:54.018994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:11.273 [2024-11-15 11:30:54.019005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:11.273 [2024-11-15 11:30:54.019023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:11.273 [2024-11-15 11:30:54.019034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:11.273 [2024-11-15 11:30:54.019047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:11.273 [2024-11-15 11:30:54.019057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:11.273 [2024-11-15 11:30:54.019070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:11.273 [2024-11-15 11:30:54.019098] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:11.273 [2024-11-15 11:30:54.019118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:11.273 [2024-11-15 11:30:54.019148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:11.273 [2024-11-15 11:30:54.019159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:11.273 [2024-11-15 11:30:54.019187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:11.273 [2024-11-15 11:30:54.019198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:11.273 [2024-11-15 11:30:54.019211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:11.273 [2024-11-15 11:30:54.019222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:11.273 [2024-11-15 11:30:54.019250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:11.273 [2024-11-15 11:30:54.019261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:11.273 [2024-11-15 11:30:54.019278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:11.273 [2024-11-15 11:30:54.019337] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:11.273 [2024-11-15 11:30:54.019359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:11.273 [2024-11-15 11:30:54.019385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:11.273 [2024-11-15 11:30:54.019396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:11.273 [2024-11-15 11:30:54.019410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:11.273 [2024-11-15 11:30:54.019423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.273 [2024-11-15 11:30:54.019436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:11.273 [2024-11-15 11:30:54.019448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:19:11.273 [2024-11-15 11:30:54.019473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.273 [2024-11-15 11:30:54.019567] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:11.273 [2024-11-15 11:30:54.019589] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:14.561 [2024-11-15 11:30:56.920751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:56.920839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:14.561 [2024-11-15 11:30:56.920858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2901.194 ms 00:19:14.561 [2024-11-15 11:30:56.920873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:56.955762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:56.955839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.561 [2024-11-15 11:30:56.955859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.529 ms 00:19:14.561 [2024-11-15 11:30:56.955873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:56.956095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:56.956119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.561 [2024-11-15 11:30:56.956133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:14.561 [2024-11-15 11:30:56.956149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.004307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.004379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.561 [2024-11-15 11:30:57.004397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.074 ms 00:19:14.561 [2024-11-15 11:30:57.004413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.004526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.004548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.561 [2024-11-15 11:30:57.004562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.561 [2024-11-15 11:30:57.004575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.005205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.005231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.561 [2024-11-15 11:30:57.005245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:19:14.561 [2024-11-15 11:30:57.005259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.005448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.005498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.561 [2024-11-15 11:30:57.005510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:14.561 [2024-11-15 11:30:57.005526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.024731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.024797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:14.561 [2024-11-15 11:30:57.024813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.095 ms 00:19:14.561 [2024-11-15 11:30:57.024826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.037558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:14.561 [2024-11-15 11:30:57.058027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.058390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.561 [2024-11-15 11:30:57.058429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.062 ms 00:19:14.561 [2024-11-15 11:30:57.058443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.135247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.135600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:14.561 [2024-11-15 11:30:57.135638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.635 ms 00:19:14.561 [2024-11-15 11:30:57.135653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.135952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.135972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.561 [2024-11-15 11:30:57.135991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:19:14.561 [2024-11-15 11:30:57.136003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.163034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.163089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:14.561 [2024-11-15 11:30:57.163124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.984 ms 00:19:14.561 [2024-11-15 11:30:57.163136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.190930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.191160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:14.561 [2024-11-15 11:30:57.191196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.676 ms 00:19:14.561 [2024-11-15 11:30:57.191209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.192212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.192248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.561 [2024-11-15 11:30:57.192267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:19:14.561 [2024-11-15 11:30:57.192279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.271203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.271259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:14.561 [2024-11-15 11:30:57.271298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.875 ms 00:19:14.561 [2024-11-15 11:30:57.271310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
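Everything from "Check configuration" down through the scrub and metadata-initialization steps above is one FTL management pipeline; each Action / name / duration / status quartet is a trace_step record for a single startup stage. The whole device bring-up that trim.sh traced reduces to a short RPC sequence; a condensed, illustrative replay follows (the lvstore UUID is specific to this run, and bdev_lvol_create prints the new lvol bdev's name, captured here the way the create_base_bdev helper does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base dev -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
    base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t \
           -u 52e40787-0334-4e6a-84ab-dcde8abd22d1)                     # thin lvol, 103424 MiB
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache dev -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache part
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
         --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

In this run the bdev_ftl_create step dominates the timeline: the NV cache scrub alone accounts for about 2.9 s of the roughly 3.4 s 'FTL startup' total reported once the management process finishes below.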
00:19:14.561 [2024-11-15 11:30:57.300202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.300242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:14.561 [2024-11-15 11:30:57.300276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.744 ms 00:19:14.561 [2024-11-15 11:30:57.300288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.326888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.326925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:14.561 [2024-11-15 11:30:57.326959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.508 ms 00:19:14.561 [2024-11-15 11:30:57.326970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.354700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.354921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.561 [2024-11-15 11:30:57.354970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.612 ms 00:19:14.561 [2024-11-15 11:30:57.355002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.355125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.355148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.561 [2024-11-15 11:30:57.355167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:14.561 [2024-11-15 11:30:57.355180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.355291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.561 [2024-11-15 11:30:57.355306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.561 [2024-11-15 11:30:57.355319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:14.561 [2024-11-15 11:30:57.355330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.561 [2024-11-15 11:30:57.356675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.562 [2024-11-15 11:30:57.360339] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3361.601 ms, result 0 00:19:14.562 [2024-11-15 11:30:57.361340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:14.562 { 00:19:14.562 "name": "ftl0", 00:19:14.562 "uuid": "8de081b6-03aa-47ad-91df-ca5aa125d9cd" 00:19:14.562 } 00:19:14.562 11:30:57 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:14.562 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:19:14.562 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:14.562 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:19:14.562 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:14.562 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:14.562 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:14.821 11:30:57 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:15.079 [ 00:19:15.079 { 00:19:15.079 "name": "ftl0", 00:19:15.079 "aliases": [ 00:19:15.079 "8de081b6-03aa-47ad-91df-ca5aa125d9cd" 00:19:15.079 ], 00:19:15.079 "product_name": "FTL disk", 00:19:15.079 "block_size": 4096, 00:19:15.080 "num_blocks": 23592960, 00:19:15.080 "uuid": "8de081b6-03aa-47ad-91df-ca5aa125d9cd", 00:19:15.080 "assigned_rate_limits": { 00:19:15.080 "rw_ios_per_sec": 0, 00:19:15.080 "rw_mbytes_per_sec": 0, 00:19:15.080 "r_mbytes_per_sec": 0, 00:19:15.080 "w_mbytes_per_sec": 0 00:19:15.080 }, 00:19:15.080 "claimed": false, 00:19:15.080 "zoned": false, 00:19:15.080 "supported_io_types": { 00:19:15.080 "read": true, 00:19:15.080 "write": true, 00:19:15.080 "unmap": true, 00:19:15.080 "flush": true, 00:19:15.080 "reset": false, 00:19:15.080 "nvme_admin": false, 00:19:15.080 "nvme_io": false, 00:19:15.080 "nvme_io_md": false, 00:19:15.080 "write_zeroes": true, 00:19:15.080 "zcopy": false, 00:19:15.080 "get_zone_info": false, 00:19:15.080 "zone_management": false, 00:19:15.080 "zone_append": false, 00:19:15.080 "compare": false, 00:19:15.080 "compare_and_write": false, 00:19:15.080 "abort": false, 00:19:15.080 "seek_hole": false, 00:19:15.080 "seek_data": false, 00:19:15.080 "copy": false, 00:19:15.080 "nvme_iov_md": false 00:19:15.080 }, 00:19:15.080 "driver_specific": { 00:19:15.080 "ftl": { 00:19:15.080 "base_bdev": "84340e5c-ede7-44d6-b236-0a1f4151719c", 00:19:15.080 "cache": "nvc0n1p0" 00:19:15.080 } 00:19:15.080 } 00:19:15.080 } 00:19:15.080 ] 00:19:15.080 11:30:57 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:19:15.080 11:30:57 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:15.080 11:30:57 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:15.339 11:30:58 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:15.339 11:30:58 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:15.598 11:30:58 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:15.598 { 00:19:15.598 "name": "ftl0", 00:19:15.598 "aliases": [ 00:19:15.598 "8de081b6-03aa-47ad-91df-ca5aa125d9cd" 00:19:15.598 ], 00:19:15.598 "product_name": "FTL disk", 00:19:15.598 "block_size": 4096, 00:19:15.598 "num_blocks": 23592960, 00:19:15.598 "uuid": "8de081b6-03aa-47ad-91df-ca5aa125d9cd", 00:19:15.598 "assigned_rate_limits": { 00:19:15.598 "rw_ios_per_sec": 0, 00:19:15.598 "rw_mbytes_per_sec": 0, 00:19:15.598 "r_mbytes_per_sec": 0, 00:19:15.598 "w_mbytes_per_sec": 0 00:19:15.598 }, 00:19:15.598 "claimed": false, 00:19:15.598 "zoned": false, 00:19:15.598 "supported_io_types": { 00:19:15.598 "read": true, 00:19:15.598 "write": true, 00:19:15.598 "unmap": true, 00:19:15.598 "flush": true, 00:19:15.598 "reset": false, 00:19:15.598 "nvme_admin": false, 00:19:15.598 "nvme_io": false, 00:19:15.598 "nvme_io_md": false, 00:19:15.598 "write_zeroes": true, 00:19:15.598 "zcopy": false, 00:19:15.598 "get_zone_info": false, 00:19:15.598 "zone_management": false, 00:19:15.598 "zone_append": false, 00:19:15.598 "compare": false, 00:19:15.598 "compare_and_write": false, 00:19:15.598 "abort": false, 00:19:15.598 "seek_hole": false, 00:19:15.598 "seek_data": false, 00:19:15.598 "copy": false, 00:19:15.598 "nvme_iov_md": false 00:19:15.598 }, 00:19:15.598 "driver_specific": { 00:19:15.598 "ftl": { 00:19:15.598 "base_bdev": "84340e5c-ede7-44d6-b236-0a1f4151719c", 
00:19:15.598 "cache": "nvc0n1p0" 00:19:15.598 } 00:19:15.598 } 00:19:15.598 } 00:19:15.598 ]' 00:19:15.598 11:30:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:15.857 11:30:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:15.857 11:30:58 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:15.857 [2024-11-15 11:30:58.768048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.857 [2024-11-15 11:30:58.768135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:15.857 [2024-11-15 11:30:58.768158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:15.857 [2024-11-15 11:30:58.768175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.857 [2024-11-15 11:30:58.768219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:15.857 [2024-11-15 11:30:58.771760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.857 [2024-11-15 11:30:58.771791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:15.857 [2024-11-15 11:30:58.771811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.515 ms 00:19:15.857 [2024-11-15 11:30:58.771822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.857 [2024-11-15 11:30:58.772429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.857 [2024-11-15 11:30:58.772487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:15.857 [2024-11-15 11:30:58.772519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:19:15.857 [2024-11-15 11:30:58.772531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.857 [2024-11-15 11:30:58.775945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.857 [2024-11-15 11:30:58.775977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:15.857 [2024-11-15 11:30:58.776010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:19:15.857 [2024-11-15 11:30:58.776021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.857 [2024-11-15 11:30:58.782835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.857 [2024-11-15 11:30:58.782873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:15.857 [2024-11-15 11:30:58.782907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:19:15.857 [2024-11-15 11:30:58.782918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.118 [2024-11-15 11:30:58.811169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.118 [2024-11-15 11:30:58.811212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.118 [2024-11-15 11:30:58.811251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.168 ms 00:19:16.118 [2024-11-15 11:30:58.811262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.118 [2024-11-15 11:30:58.828782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.118 [2024-11-15 11:30:58.828825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.118 [2024-11-15 11:30:58.828861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 17.425 ms
00:19:16.118 [2024-11-15 11:30:58.828874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.118 [2024-11-15 11:30:58.829134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:16.118 [2024-11-15 11:30:58.829163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:16.118 [2024-11-15 11:30:58.829179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms
00:19:16.118 [2024-11-15 11:30:58.829190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.118 [2024-11-15 11:30:58.856494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:16.118 [2024-11-15 11:30:58.856550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:16.118 [2024-11-15 11:30:58.856583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.261 ms
00:19:16.118 [2024-11-15 11:30:58.856594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.118 [2024-11-15 11:30:58.884055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:16.118 [2024-11-15 11:30:58.884091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:16.118 [2024-11-15 11:30:58.884128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.343 ms
00:19:16.118 [2024-11-15 11:30:58.884139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.118 [2024-11-15 11:30:58.910844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:16.118 [2024-11-15 11:30:58.910898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:16.118 [2024-11-15 11:30:58.910931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.609 ms
00:19:16.118 [2024-11-15 11:30:58.910941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.118 [2024-11-15 11:30:58.937535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:16.118 [2024-11-15 11:30:58.937589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:16.118 [2024-11-15 11:30:58.937623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.434 ms
00:19:16.118 [2024-11-15 11:30:58.937633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.118 [2024-11-15 11:30:58.937727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:16.118 [2024-11-15 11:30:58.937750 .. 11:30:58.939337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:16.119 [2024-11-15 11:30:58.939358] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:16.119 [2024-11-15 11:30:58.939375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd
00:19:16.119 [2024-11-15 11:30:58.939388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:16.119 [2024-11-15 11:30:58.939417] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:16.119 [2024-11-15 11:30:58.939428] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:16.119 [2024-11-15 11:30:58.939445] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:16.119 [2024-11-15 11:30:58.939455] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:16.119 [2024-11-15 11:30:58.939468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:16.119 [2024-11-15 11:30:58.939479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.119 [2024-11-15 11:30:58.939491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.119 [2024-11-15 11:30:58.939501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.119 [2024-11-15 11:30:58.939514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-15 11:30:58.939526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.119 [2024-11-15 11:30:58.939539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.791 ms 00:19:16.119 [2024-11-15 11:30:58.939550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-15 11:30:58.954755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-15 11:30:58.954809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:16.119 [2024-11-15 11:30:58.954845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.164 ms 00:19:16.119 [2024-11-15 11:30:58.954856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-15 11:30:58.955389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.119 [2024-11-15 11:30:58.955416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:16.119 [2024-11-15 11:30:58.955434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:19:16.119 [2024-11-15 11:30:58.955461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-15 11:30:59.006897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.119 [2024-11-15 11:30:59.006960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.119 [2024-11-15 11:30:59.006993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.119 [2024-11-15 11:30:59.007020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.119 [2024-11-15 11:30:59.007168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.120 [2024-11-15 11:30:59.007186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.120 [2024-11-15 11:30:59.007217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.120 [2024-11-15 11:30:59.007243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.120 [2024-11-15 11:30:59.007322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.120 [2024-11-15 11:30:59.007340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.120 [2024-11-15 11:30:59.007360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.120 [2024-11-15 11:30:59.007371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.120 [2024-11-15 11:30:59.007411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.120 [2024-11-15 11:30:59.007424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.120 [2024-11-15 11:30:59.007437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.120 [2024-11-15 11:30:59.007447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.101746] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.101826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.379 [2024-11-15 11:30:59.101862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.101873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.174605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.174674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.379 [2024-11-15 11:30:59.174710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.174722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.174854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.174872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.379 [2024-11-15 11:30:59.174909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.174923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.175021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.175035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.379 [2024-11-15 11:30:59.175049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.175077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.175248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.175267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.379 [2024-11-15 11:30:59.175283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.175298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.175372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.175405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:16.379 [2024-11-15 11:30:59.175420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.175431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.175509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.175523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.379 [2024-11-15 11:30:59.175540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.175552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.379 [2024-11-15 11:30:59.175640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:16.379 [2024-11-15 11:30:59.175656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.379 [2024-11-15 11:30:59.175671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:16.379 [2024-11-15 11:30:59.175682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:16.379 [2024-11-15 11:30:59.175896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.836 ms, result 0 00:19:16.379 true 00:19:16.379 11:30:59 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75471 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75471 ']' 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75471 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75471 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:16.379 killing process with pid 75471 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75471' 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75471 00:19:16.379 11:30:59 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75471 00:19:21.648 11:31:03 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:21.908 65536+0 records in 00:19:21.908 65536+0 records out 00:19:21.908 268435456 bytes (268 MB, 256 MiB) copied, 0.964526 s, 278 MB/s 00:19:21.908 11:31:04 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:21.908 [2024-11-15 11:31:04.785883] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
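The teardown traced above (common/autotest_common.sh @952-@976) is the autotest killprocess helper. Reassembling the xtrace lines into a function gives roughly the following sketch; only the commands actually visible in the trace are certain, and the sudo branch and return codes are assumptions:

killprocess() {
    local pid=$1
    [[ -z "$pid" ]] && return 1                          # @952: a pid argument is required
    kill -0 "$pid" || return 1                           # @956: signal 0 only tests that the pid is alive
    local process_name=
    if [[ "$(uname)" == "Linux" ]]; then                 # @957: platform check seen in the trace
        process_name=$(ps --no-headers -o comm= "$pid")  # @958: resolves to reactor_0 here
    fi
    if [[ "$process_name" != "sudo" ]]; then             # @962: sudo-owned pids take another path (assumed)
        echo "killing process with pid $pid"             # @970
        kill "$pid"                                      # @971: plain SIGTERM
    fi
    wait "$pid"                                          # @976: reap the process, propagate its exit status
}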
00:19:21.908 [2024-11-15 11:31:04.786088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75670 ] 00:19:22.167 [2024-11-15 11:31:04.964593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.167 [2024-11-15 11:31:05.068276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.738 [2024-11-15 11:31:05.391485] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.738 [2024-11-15 11:31:05.391582] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.738 [2024-11-15 11:31:05.552565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.552613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.738 [2024-11-15 11:31:05.552647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:22.738 [2024-11-15 11:31:05.552658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.555916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.555954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.738 [2024-11-15 11:31:05.555984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:19:22.738 [2024-11-15 11:31:05.555995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.556135] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.738 [2024-11-15 11:31:05.556990] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.738 [2024-11-15 11:31:05.557098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.557130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.738 [2024-11-15 11:31:05.557143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:19:22.738 [2024-11-15 11:31:05.557154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.559390] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:22.738 [2024-11-15 11:31:05.573790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.573835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:22.738 [2024-11-15 11:31:05.573867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.402 ms 00:19:22.738 [2024-11-15 11:31:05.573877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.573983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.574003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:22.738 [2024-11-15 11:31:05.574015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:22.738 [2024-11-15 11:31:05.574025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.582573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:22.738 [2024-11-15 11:31:05.582627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.738 [2024-11-15 11:31:05.582657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.452 ms 00:19:22.738 [2024-11-15 11:31:05.582668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.582794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.582813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.738 [2024-11-15 11:31:05.582825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:22.738 [2024-11-15 11:31:05.582835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.582885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.582920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.738 [2024-11-15 11:31:05.582932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.738 [2024-11-15 11:31:05.582942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.582975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:22.738 [2024-11-15 11:31:05.587432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.587482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.738 [2024-11-15 11:31:05.587511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.465 ms 00:19:22.738 [2024-11-15 11:31:05.587521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.587594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.587612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.738 [2024-11-15 11:31:05.587623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:22.738 [2024-11-15 11:31:05.587663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.587692] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:22.738 [2024-11-15 11:31:05.587725] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:22.738 [2024-11-15 11:31:05.587763] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:22.738 [2024-11-15 11:31:05.587782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:22.738 [2024-11-15 11:31:05.587881] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.738 [2024-11-15 11:31:05.587896] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.738 [2024-11-15 11:31:05.587909] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.738 [2024-11-15 11:31:05.587922] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.738 [2024-11-15 11:31:05.587939] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.738 [2024-11-15 11:31:05.587950] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:22.738 [2024-11-15 11:31:05.587961] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.738 [2024-11-15 11:31:05.587970] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.738 [2024-11-15 11:31:05.587980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.738 [2024-11-15 11:31:05.587991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.588001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.738 [2024-11-15 11:31:05.588012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:22.738 [2024-11-15 11:31:05.588022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.588167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.738 [2024-11-15 11:31:05.588190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.738 [2024-11-15 11:31:05.588202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:22.738 [2024-11-15 11:31:05.588212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.738 [2024-11-15 11:31:05.588319] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.738 [2024-11-15 11:31:05.588336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.738 [2024-11-15 11:31:05.588348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.738 [2024-11-15 11:31:05.588359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.738 [2024-11-15 11:31:05.588369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.738 [2024-11-15 11:31:05.588379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.738 [2024-11-15 11:31:05.588389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:22.738 [2024-11-15 11:31:05.588399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.738 [2024-11-15 11:31:05.588408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:22.738 [2024-11-15 11:31:05.588418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.738 [2024-11-15 11:31:05.588427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.738 [2024-11-15 11:31:05.588436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:22.738 [2024-11-15 11:31:05.588445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.738 [2024-11-15 11:31:05.588467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.738 [2024-11-15 11:31:05.588477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:22.738 [2024-11-15 11:31:05.588487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.738 [2024-11-15 11:31:05.588496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.739 [2024-11-15 11:31:05.588506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588515] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.739 [2024-11-15 11:31:05.588534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.739 [2024-11-15 11:31:05.588561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.739 [2024-11-15 11:31:05.588589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.739 [2024-11-15 11:31:05.588616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.739 [2024-11-15 11:31:05.588643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.739 [2024-11-15 11:31:05.588662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.739 [2024-11-15 11:31:05.588671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:22.739 [2024-11-15 11:31:05.588680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.739 [2024-11-15 11:31:05.588690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.739 [2024-11-15 11:31:05.588699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:22.739 [2024-11-15 11:31:05.588710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.739 [2024-11-15 11:31:05.588729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:22.739 [2024-11-15 11:31:05.588738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588747] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.739 [2024-11-15 11:31:05.588757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.739 [2024-11-15 11:31:05.588767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.739 [2024-11-15 11:31:05.588807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.739 [2024-11-15 11:31:05.588818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.739 [2024-11-15 11:31:05.588827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.739 
[2024-11-15 11:31:05.588852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.739 [2024-11-15 11:31:05.588861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.739 [2024-11-15 11:31:05.588871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.739 [2024-11-15 11:31:05.588882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.739 [2024-11-15 11:31:05.588894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.588906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:22.739 [2024-11-15 11:31:05.588916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:22.739 [2024-11-15 11:31:05.588926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:22.739 [2024-11-15 11:31:05.588937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:22.739 [2024-11-15 11:31:05.588946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:22.739 [2024-11-15 11:31:05.588956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:22.739 [2024-11-15 11:31:05.588967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:22.739 [2024-11-15 11:31:05.588976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:22.739 [2024-11-15 11:31:05.588986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:22.739 [2024-11-15 11:31:05.588995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.589005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.589015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.589025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.589035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:22.739 [2024-11-15 11:31:05.589045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.739 [2024-11-15 11:31:05.589093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.589110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.739 [2024-11-15 11:31:05.589122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.739 [2024-11-15 11:31:05.589133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.739 [2024-11-15 11:31:05.589144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.739 [2024-11-15 11:31:05.589156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.589167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.739 [2024-11-15 11:31:05.589183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:19:22.739 [2024-11-15 11:31:05.589193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.739 [2024-11-15 11:31:05.625046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.625159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.739 [2024-11-15 11:31:05.625196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.752 ms 00:19:22.739 [2024-11-15 11:31:05.625208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.739 [2024-11-15 11:31:05.625377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.625433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:22.739 [2024-11-15 11:31:05.625460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:22.739 [2024-11-15 11:31:05.625470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.739 [2024-11-15 11:31:05.678155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.678205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.739 [2024-11-15 11:31:05.678238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.641 ms 00:19:22.739 [2024-11-15 11:31:05.678254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.739 [2024-11-15 11:31:05.678424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.678443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.739 [2024-11-15 11:31:05.678455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:22.739 [2024-11-15 11:31:05.678481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.739 [2024-11-15 11:31:05.679156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.679183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.739 [2024-11-15 11:31:05.679197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:19:22.739 [2024-11-15 11:31:05.679213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.739 [2024-11-15 11:31:05.679423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.739 [2024-11-15 11:31:05.679451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.739 [2024-11-15 11:31:05.679464] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:19:22.739 [2024-11-15 11:31:05.679479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.697092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.697153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.001 [2024-11-15 11:31:05.697184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.581 ms 00:19:23.001 [2024-11-15 11:31:05.697194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.711261] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:23.001 [2024-11-15 11:31:05.711306] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:23.001 [2024-11-15 11:31:05.711338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.711349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:23.001 [2024-11-15 11:31:05.711360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.003 ms 00:19:23.001 [2024-11-15 11:31:05.711378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.739617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.739666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.001 [2024-11-15 11:31:05.739711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.151 ms 00:19:23.001 [2024-11-15 11:31:05.739722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.753426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.753485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.001 [2024-11-15 11:31:05.753515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.611 ms 00:19:23.001 [2024-11-15 11:31:05.753535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.766396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.766452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.001 [2024-11-15 11:31:05.766466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:19:23.001 [2024-11-15 11:31:05.766476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.767336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.767400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.001 [2024-11-15 11:31:05.767415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:19:23.001 [2024-11-15 11:31:05.767426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.835192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.835268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.001 [2024-11-15 11:31:05.835302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.702 ms 00:19:23.001 [2024-11-15 11:31:05.835314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.845538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:23.001 [2024-11-15 11:31:05.864020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.864081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.001 [2024-11-15 11:31:05.864118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.552 ms 00:19:23.001 [2024-11-15 11:31:05.864135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.864262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.864285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.001 [2024-11-15 11:31:05.864298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:23.001 [2024-11-15 11:31:05.864308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.864406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.864424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.001 [2024-11-15 11:31:05.864436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:23.001 [2024-11-15 11:31:05.864446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.864505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.864524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:23.001 [2024-11-15 11:31:05.864540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:23.001 [2024-11-15 11:31:05.864550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.864595] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.001 [2024-11-15 11:31:05.864624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.864636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.001 [2024-11-15 11:31:05.864648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:23.001 [2024-11-15 11:31:05.864658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.890425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.890489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.001 [2024-11-15 11:31:05.890520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.731 ms 00:19:23.001 [2024-11-15 11:31:05.890531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.001 [2024-11-15 11:31:05.890659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.001 [2024-11-15 11:31:05.890703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:23.001 [2024-11-15 11:31:05.890731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:23.001 [2024-11-15 11:31:05.890756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
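The Copying progress that follows is produced by the spdk_dd step traced at ftl/trim.sh@66 and @69 above. Assembled as a standalone snippet it looks roughly like this; the redirection of the dd output into the random_pattern file is an assumption, since the trace elides it:

# Repo path as it appears throughout the trace.
spdk=/home/vagrant/spdk_repo/spdk
# trim.sh@66: 65536 x 4 KiB blocks of /dev/urandom = 256 MiB of test data
# (destination file assumed).
dd if=/dev/urandom bs=4K count=65536 > "$spdk/test/ftl/random_pattern"
# trim.sh@69: replay the pattern into the ftl0 bdev described by ftl.json;
# --if is the input file, --ob the output bdev, --json the bdev config.
"$spdk/build/bin/spdk_dd" \
    --if="$spdk/test/ftl/random_pattern" \
    --ob=ftl0 \
    --json="$spdk/test/ftl/config/ftl.json"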
00:19:23.001 [2024-11-15 11:31:05.892201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:23.001 [2024-11-15 11:31:05.895687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.174 ms, result 0 00:19:23.001 [2024-11-15 11:31:05.896528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.001 [2024-11-15 11:31:05.911006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.377  [2024-11-15T11:31:08.262Z] Copying: 21/256 [MB] (21 MBps) [2024-11-15T11:31:09.198Z] Copying: 43/256 [MB] (21 MBps) [2024-11-15T11:31:10.135Z] Copying: 65/256 [MB] (21 MBps) [2024-11-15T11:31:11.071Z] Copying: 86/256 [MB] (21 MBps) [2024-11-15T11:31:12.007Z] Copying: 108/256 [MB] (21 MBps) [2024-11-15T11:31:12.943Z] Copying: 129/256 [MB] (21 MBps) [2024-11-15T11:31:14.318Z] Copying: 152/256 [MB] (22 MBps) [2024-11-15T11:31:15.252Z] Copying: 175/256 [MB] (22 MBps) [2024-11-15T11:31:16.189Z] Copying: 197/256 [MB] (22 MBps) [2024-11-15T11:31:17.124Z] Copying: 220/256 [MB] (23 MBps) [2024-11-15T11:31:17.693Z] Copying: 243/256 [MB] (23 MBps) [2024-11-15T11:31:17.693Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-15 11:31:17.451461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.744 [2024-11-15 11:31:17.464447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.744 [2024-11-15 11:31:17.464517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.744 [2024-11-15 11:31:17.464536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:34.744 [2024-11-15 11:31:17.464548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.744 [2024-11-15 11:31:17.464590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:34.744 [2024-11-15 11:31:17.468093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.744 [2024-11-15 11:31:17.468147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.744 [2024-11-15 11:31:17.468162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:19:34.744 [2024-11-15 11:31:17.468174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.744 [2024-11-15 11:31:17.470135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.744 [2024-11-15 11:31:17.470194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.744 [2024-11-15 11:31:17.470211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.930 ms 00:19:34.744 [2024-11-15 11:31:17.470222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.744 [2024-11-15 11:31:17.477706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.744 [2024-11-15 11:31:17.477767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.744 [2024-11-15 11:31:17.477792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.460 ms 00:19:34.744 [2024-11-15 11:31:17.477804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.744 [2024-11-15 11:31:17.484934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.744 
[2024-11-15 11:31:17.484989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.745 [2024-11-15 11:31:17.485036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.065 ms 00:19:34.745 [2024-11-15 11:31:17.485048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.514937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.515000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.745 [2024-11-15 11:31:17.515017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.786 ms 00:19:34.745 [2024-11-15 11:31:17.515044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.531867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.531930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.745 [2024-11-15 11:31:17.531953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.756 ms 00:19:34.745 [2024-11-15 11:31:17.531970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.532144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.532165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.745 [2024-11-15 11:31:17.532194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:34.745 [2024-11-15 11:31:17.532206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.560944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.561020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.745 [2024-11-15 11:31:17.561047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.699 ms 00:19:34.745 [2024-11-15 11:31:17.561060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.589428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.589490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.745 [2024-11-15 11:31:17.589507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.279 ms 00:19:34.745 [2024-11-15 11:31:17.589518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.618609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.618684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.745 [2024-11-15 11:31:17.618702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.027 ms 00:19:34.745 [2024-11-15 11:31:17.618713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.647098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.745 [2024-11-15 11:31:17.647170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.745 [2024-11-15 11:31:17.647188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.276 ms 00:19:34.745 [2024-11-15 11:31:17.647199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.745 [2024-11-15 11:31:17.647267] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.745 [2024-11-15 11:31:17.647300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647614] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 
11:31:17.647906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.745 [2024-11-15 11:31:17.647953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.647965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.647977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.647988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:19:34.746 [2024-11-15 11:31:17.648218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.746 [2024-11-15 11:31:17.648551] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.746 [2024-11-15 11:31:17.648564] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd 00:19:34.746 [2024-11-15 11:31:17.648576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.746 [2024-11-15 11:31:17.648588] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.746 [2024-11-15 11:31:17.648599] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.746 [2024-11-15 11:31:17.648610] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.746 [2024-11-15 11:31:17.648622] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.746 [2024-11-15 11:31:17.648633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.746 [2024-11-15 11:31:17.648644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.746 [2024-11-15 11:31:17.648655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.746 [2024-11-15 11:31:17.648665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.746 [2024-11-15 11:31:17.648677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.746 [2024-11-15 11:31:17.648689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.746 [2024-11-15 11:31:17.648709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:19:34.746 [2024-11-15 11:31:17.648720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.746 [2024-11-15 11:31:17.664748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.746 [2024-11-15 11:31:17.664813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.746 [2024-11-15 11:31:17.664831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.999 ms 00:19:34.746 [2024-11-15 11:31:17.664843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.746 [2024-11-15 11:31:17.665386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.746 [2024-11-15 11:31:17.665425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.746 [2024-11-15 11:31:17.665440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:19:34.746 [2024-11-15 11:31:17.665451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.023 [2024-11-15 11:31:17.710236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.023 [2024-11-15 11:31:17.710324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.023 [2024-11-15 11:31:17.710343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.023 [2024-11-15 11:31:17.710355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.023 [2024-11-15 11:31:17.710510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.023 [2024-11-15 11:31:17.710532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.023 [2024-11-15 11:31:17.710545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:35.023 [2024-11-15 11:31:17.710555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.023 [2024-11-15 11:31:17.710654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.023 [2024-11-15 11:31:17.710674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.023 [2024-11-15 11:31:17.710688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.023 [2024-11-15 11:31:17.710699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.710726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.710741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.024 [2024-11-15 11:31:17.710760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.710772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.811327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.811419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.024 [2024-11-15 11:31:17.811438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.811450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.890786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.890888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.024 [2024-11-15 11:31:17.890907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.890918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.891024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.024 [2024-11-15 11:31:17.891077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.891090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.891143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.024 [2024-11-15 11:31:17.891155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.891173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.891343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.024 [2024-11-15 11:31:17.891357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.891367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.891440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.024 
[2024-11-15 11:31:17.891453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.891464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.891539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.024 [2024-11-15 11:31:17.891551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.891562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.024 [2024-11-15 11:31:17.891637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.024 [2024-11-15 11:31:17.891650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.024 [2024-11-15 11:31:17.891668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.024 [2024-11-15 11:31:17.891845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 427.387 ms, result 0 00:19:36.398 00:19:36.398 00:19:36.398 11:31:18 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75817 00:19:36.398 11:31:18 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:36.398 11:31:18 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75817 00:19:36.398 11:31:18 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75817 ']' 00:19:36.398 11:31:18 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.398 11:31:18 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:36.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.398 11:31:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.398 11:31:18 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:36.398 11:31:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:36.398 [2024-11-15 11:31:19.116531] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:19:36.398 [2024-11-15 11:31:19.116713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75817 ] 00:19:36.398 [2024-11-15 11:31:19.298415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.656 [2024-11-15 11:31:19.407369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.589 11:31:20 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:37.589 11:31:20 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:19:37.589 11:31:20 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:37.589 [2024-11-15 11:31:20.484758] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.589 [2024-11-15 11:31:20.484890] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.848 [2024-11-15 11:31:20.668987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.669111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:37.848 [2024-11-15 11:31:20.669159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:37.848 [2024-11-15 11:31:20.669173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.673278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.673339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.848 [2024-11-15 11:31:20.673374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.076 ms 00:19:37.848 [2024-11-15 11:31:20.673401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.673564] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:37.848 [2024-11-15 11:31:20.674528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:37.848 [2024-11-15 11:31:20.674584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.674615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.848 [2024-11-15 11:31:20.674629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:19:37.848 [2024-11-15 11:31:20.674655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.676783] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:37.848 [2024-11-15 11:31:20.691893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.691979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:37.848 [2024-11-15 11:31:20.691997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.115 ms 00:19:37.848 [2024-11-15 11:31:20.692015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.692152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.692212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:37.848 [2024-11-15 11:31:20.692244] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:37.848 [2024-11-15 11:31:20.692278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.700941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.701041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.848 [2024-11-15 11:31:20.701059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.588 ms 00:19:37.848 [2024-11-15 11:31:20.701101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.701297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.701352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.848 [2024-11-15 11:31:20.701368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:19:37.848 [2024-11-15 11:31:20.701387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.701455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.701477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.848 [2024-11-15 11:31:20.701490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:37.848 [2024-11-15 11:31:20.701507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.701554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:37.848 [2024-11-15 11:31:20.706194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.706247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.848 [2024-11-15 11:31:20.706285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:19:37.848 [2024-11-15 11:31:20.706298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.706395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.706419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.848 [2024-11-15 11:31:20.706438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:37.848 [2024-11-15 11:31:20.706470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.706538] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:37.848 [2024-11-15 11:31:20.706568] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:37.848 [2024-11-15 11:31:20.706627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:37.848 [2024-11-15 11:31:20.706652] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:37.848 [2024-11-15 11:31:20.706769] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:37.848 [2024-11-15 11:31:20.706798] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.848 [2024-11-15 11:31:20.706832] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:37.848 [2024-11-15 11:31:20.706848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.848 [2024-11-15 11:31:20.706867] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.848 [2024-11-15 11:31:20.706881] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:37.848 [2024-11-15 11:31:20.706898] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.848 [2024-11-15 11:31:20.706909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:37.848 [2024-11-15 11:31:20.706930] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:37.848 [2024-11-15 11:31:20.706959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.706992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.848 [2024-11-15 11:31:20.707005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:19:37.848 [2024-11-15 11:31:20.707021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.707146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.848 [2024-11-15 11:31:20.707170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.848 [2024-11-15 11:31:20.707184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:37.848 [2024-11-15 11:31:20.707200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.848 [2024-11-15 11:31:20.707326] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.848 [2024-11-15 11:31:20.707354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.848 [2024-11-15 11:31:20.707368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.848 [2024-11-15 11:31:20.707385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.848 [2024-11-15 11:31:20.707398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.848 [2024-11-15 11:31:20.707414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.848 [2024-11-15 11:31:20.707425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:37.848 [2024-11-15 11:31:20.707448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.848 [2024-11-15 11:31:20.707460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:37.848 [2024-11-15 11:31:20.707480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.848 [2024-11-15 11:31:20.707491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.848 [2024-11-15 11:31:20.707507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:37.848 [2024-11-15 11:31:20.707517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.848 [2024-11-15 11:31:20.707533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.848 [2024-11-15 11:31:20.707544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:37.848 [2024-11-15 11:31:20.707559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.848 
[2024-11-15 11:31:20.707570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.848 [2024-11-15 11:31:20.707586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:37.848 [2024-11-15 11:31:20.707598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.848 [2024-11-15 11:31:20.707615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.848 [2024-11-15 11:31:20.707640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:37.848 [2024-11-15 11:31:20.707657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.848 [2024-11-15 11:31:20.707669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.848 [2024-11-15 11:31:20.707690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:37.848 [2024-11-15 11:31:20.707701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.848 [2024-11-15 11:31:20.707717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.848 [2024-11-15 11:31:20.707728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:37.849 [2024-11-15 11:31:20.707743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.849 [2024-11-15 11:31:20.707754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.849 [2024-11-15 11:31:20.707770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:37.849 [2024-11-15 11:31:20.707781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.849 [2024-11-15 11:31:20.707798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.849 [2024-11-15 11:31:20.707809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:37.849 [2024-11-15 11:31:20.707824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.849 [2024-11-15 11:31:20.707835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.849 [2024-11-15 11:31:20.707851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:37.849 [2024-11-15 11:31:20.707862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.849 [2024-11-15 11:31:20.707879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:37.849 [2024-11-15 11:31:20.707891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:37.849 [2024-11-15 11:31:20.707911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.849 [2024-11-15 11:31:20.707923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:37.849 [2024-11-15 11:31:20.707938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:37.849 [2024-11-15 11:31:20.707950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.849 [2024-11-15 11:31:20.707965] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.849 [2024-11-15 11:31:20.707983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.849 [2024-11-15 11:31:20.707999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.849 [2024-11-15 11:31:20.708012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.849 [2024-11-15 11:31:20.708042] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:37.849 [2024-11-15 11:31:20.708073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.849 [2024-11-15 11:31:20.708093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.849 [2024-11-15 11:31:20.708106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.849 [2024-11-15 11:31:20.708122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.849 [2024-11-15 11:31:20.708134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.849 [2024-11-15 11:31:20.708152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.849 [2024-11-15 11:31:20.708169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:37.849 [2024-11-15 11:31:20.708208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:37.849 [2024-11-15 11:31:20.708224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:37.849 [2024-11-15 11:31:20.708236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:37.849 [2024-11-15 11:31:20.708253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:37.849 [2024-11-15 11:31:20.708266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:37.849 [2024-11-15 11:31:20.708283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:37.849 [2024-11-15 11:31:20.708295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:37.849 [2024-11-15 11:31:20.708312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:37.849 [2024-11-15 11:31:20.708325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:37.849 [2024-11-15 11:31:20.708401] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.849 [2024-11-15 
11:31:20.708431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:37.849 [2024-11-15 11:31:20.708466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.849 [2024-11-15 11:31:20.708484] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.849 [2024-11-15 11:31:20.708497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.849 [2024-11-15 11:31:20.708515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.708529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.849 [2024-11-15 11:31:20.708546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:19:37.849 [2024-11-15 11:31:20.708559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.849 [2024-11-15 11:31:20.746338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.746415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.849 [2024-11-15 11:31:20.746473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.681 ms 00:19:37.849 [2024-11-15 11:31:20.746500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.849 [2024-11-15 11:31:20.746677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.746695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.849 [2024-11-15 11:31:20.746727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:37.849 [2024-11-15 11:31:20.746738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.849 [2024-11-15 11:31:20.788450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.788526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.849 [2024-11-15 11:31:20.788564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.642 ms 00:19:37.849 [2024-11-15 11:31:20.788576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.849 [2024-11-15 11:31:20.788699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.788716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.849 [2024-11-15 11:31:20.788731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:37.849 [2024-11-15 11:31:20.788742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.849 [2024-11-15 11:31:20.789537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.789584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.849 [2024-11-15 11:31:20.789621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:19:37.849 [2024-11-15 11:31:20.789633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:37.849 [2024-11-15 11:31:20.789831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.849 [2024-11-15 11:31:20.789848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.849 [2024-11-15 11:31:20.789863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:37.849 [2024-11-15 11:31:20.789874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.107 [2024-11-15 11:31:20.811099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.107 [2024-11-15 11:31:20.811163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.107 [2024-11-15 11:31:20.811203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.186 ms 00:19:38.107 [2024-11-15 11:31:20.811216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.107 [2024-11-15 11:31:20.839888] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:38.108 [2024-11-15 11:31:20.839948] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:38.108 [2024-11-15 11:31:20.839990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.840004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:38.108 [2024-11-15 11:31:20.840022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.634 ms 00:19:38.108 [2024-11-15 11:31:20.840047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.865086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.865144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:38.108 [2024-11-15 11:31:20.865185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.932 ms 00:19:38.108 [2024-11-15 11:31:20.865197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.879359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.879437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:38.108 [2024-11-15 11:31:20.879491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.053 ms 00:19:38.108 [2024-11-15 11:31:20.879504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.894540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.894599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:38.108 [2024-11-15 11:31:20.894640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.935 ms 00:19:38.108 [2024-11-15 11:31:20.894653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.895648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.895697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.108 [2024-11-15 11:31:20.895736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:19:38.108 [2024-11-15 11:31:20.895749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 
11:31:20.968112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.968195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:38.108 [2024-11-15 11:31:20.968235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.324 ms 00:19:38.108 [2024-11-15 11:31:20.968248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.979478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:38.108 [2024-11-15 11:31:20.998861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.998964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.108 [2024-11-15 11:31:20.998988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.478 ms 00:19:38.108 [2024-11-15 11:31:20.999003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.999173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.999195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:38.108 [2024-11-15 11:31:20.999224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.108 [2024-11-15 11:31:20.999270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.999346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.999382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.108 [2024-11-15 11:31:20.999397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:38.108 [2024-11-15 11:31:20.999415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.999450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.999478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.108 [2024-11-15 11:31:20.999492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:38.108 [2024-11-15 11:31:20.999509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:20.999553] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:38.108 [2024-11-15 11:31:20.999574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:20.999586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:38.108 [2024-11-15 11:31:20.999605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:38.108 [2024-11-15 11:31:20.999616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:21.028737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:21.028798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.108 [2024-11-15 11:31:21.028834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.080 ms 00:19:38.108 [2024-11-15 11:31:21.028846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:21.028983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.108 [2024-11-15 11:31:21.029002] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.108 [2024-11-15 11:31:21.029034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:38.108 [2024-11-15 11:31:21.029136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.108 [2024-11-15 11:31:21.030363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.108 [2024-11-15 11:31:21.034258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.955 ms, result 0 00:19:38.108 [2024-11-15 11:31:21.035573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.366 Some configs were skipped because the RPC state that can call them passed over. 00:19:38.366 11:31:21 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:38.625 [2024-11-15 11:31:21.409012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.625 [2024-11-15 11:31:21.409146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:38.625 [2024-11-15 11:31:21.409172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:19:38.625 [2024-11-15 11:31:21.409188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.625 [2024-11-15 11:31:21.409256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.952 ms, result 0 00:19:38.625 true 00:19:38.625 11:31:21 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:38.884 [2024-11-15 11:31:21.697037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.884 [2024-11-15 11:31:21.697196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:38.884 [2024-11-15 11:31:21.697238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.367 ms 00:19:38.884 [2024-11-15 11:31:21.697281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.884 [2024-11-15 11:31:21.697334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.671 ms, result 0 00:19:38.884 true 00:19:38.884 11:31:21 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75817 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75817 ']' 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75817 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75817 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75817' 00:19:38.884 killing process with pid 75817 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75817 00:19:38.884 11:31:21 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75817 00:19:39.820 [2024-11-15 11:31:22.677598] 
[2024-11-15 11:31:22] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] FTL shutdown actions (all status 0):
  Deinit core IO channel       0.004 ms  (FTL IO channel destroy on ftl_core_thread)
  Unregister IO device         3.516 ms
  Stop core poller             0.247 ms
  Persist L2P                  3.915 ms
  Finish L2P trims             6.858 ms
  Persist NV cache metadata   11.431 ms
  Persist valid map metadata   9.299 ms
  Persist P2L metadata         0.091 ms
  Persist band info metadata  12.275 ms
  Persist trim metadata       11.789 ms
  Persist superblock          11.451 ms
  Set FTL clean state         11.193 ms
[2024-11-15 11:31:22.761977] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1-100 (all 100 bands identical): 0 / 261120 wr_cnt: 0 state: free
[2024-11-15 11:31:22.763456] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd
  total valid LBAs: 0
  total writes: 960
  user writes: 0
  WAF: inf
  limits: crit: 0, high: 0, low: 0, start: 0
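The "WAF: inf" entry is expected here rather than a failure: write amplification factor is conventionally total device writes divided by user writes, and this trim workload issued no user writes, so all 960 writes are internal metadata traffic and the ratio diverges. A minimal sketch of that computation (the formula is the conventional WAF definition, not lifted from the SPDK sources):

  # WAF = total writes / user writes; a zero denominator is reported as "inf",
  # matching the stats dump above (total = 960, user = 0).
  awk 'BEGIN { total = 960; user = 0; print ((user > 0) ? total / user : "inf") }'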
[2024-11-15 11:31:22] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] FTL shutdown actions, continued (all status 0):
  Dump statistics              1.655 ms
  Deinitialize L2P            15.457 ms
  Deinitialize P2L checkpointing  0.466 ms
Rollback steps (each duration 0.000 ms, status 0):
  Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map,
  Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands,
  Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-15 11:31:23.007507] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.864 ms, result 0
11:31:23 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
11:31:23 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
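The read-back that follows is driven by the spdk_dd invocation printed above. For reference, a minimal standalone sketch of the same call, using only the flags visible in the log (--ib names the input SPDK bdev, --of the output file, --count the number of blocks to copy, --json the SPDK app configuration); treat it as an illustration of the command shape, not the full trim.sh logic:

  #!/usr/bin/env bash
  # Dump 65536 blocks from the FTL bdev "ftl0" into a plain file so the
  # test can later compare it against the data it wrote.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_dd" \
      --ib=ftl0 \
      --of="$SPDK_DIR/test/ftl/data" \
      --count=65536 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json"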
[2024-11-15 11:31:24.028108] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
[2024-11-15 11:31:24.028309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75881 ]
[2024-11-15 11:31:24.214804] app.c:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-15 11:31:24.342137] reactor.c:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-15 11:31:24.679476] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
[2024-11-15 11:31:24.841204] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] FTL startup actions (all status 0):
  Check configuration          0.005 ms
  Open base bdev               3.122 ms  (Using nvc0n1p0 as write buffer cache; Using bdev as NV Cache device)
  Open cache bdev              1.104 ms
  Load super block            14.081 ms  (SHM: clean 0, shm_clean 0)
  Validate super block         0.029 ms
  Initialize memory pools      8.490 ms
  Initialize bands             0.062 ms
  Register IO device           0.009 ms  (FTL IO channel created on ftl_core_thread)
  Initialize core IO channel   4.349 ms
  Decorate bands               0.011 ms
[2024-11-15 11:31:24.875955] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-15 11:31:24.875986] upgrade/ftl_sb_v5.c: superblock v5 blob areas loaded, then stored back: nvc layout blob 0x150 bytes, base layout blob 0x48 bytes, layout blob 0x190 bytes
[2024-11-15 11:31:24.876241] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] geometry:
  Base device capacity:      103424.00 MiB
  NV cache device capacity:    5171.00 MiB
  L2P entries:               23592960
  L2P address size:          4
  P2L checkpoint pages:      2048
  NV cache chunk count:      5
trace_step: Initialize layout 0.364 ms, Verify layout 0.078 ms (both status 0)
[2024-11-15 11:31:24.876636] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region / offset / blocks):
  sb                 0.00 MiB      0.12 MiB
  l2p                0.12 MiB     90.00 MiB
  band_md           90.12 MiB      0.50 MiB
  band_md_mirror    90.62 MiB      0.50 MiB
  nvc_md           123.88 MiB      0.12 MiB
  nvc_md_mirror    124.00 MiB      0.12 MiB
  p2l0              91.12 MiB      8.00 MiB
  p2l1              99.12 MiB      8.00 MiB
  p2l2             107.12 MiB      8.00 MiB
  p2l3             115.12 MiB      8.00 MiB
  trim_md          123.12 MiB      0.25 MiB
  trim_md_mirror   123.38 MiB      0.25 MiB
  trim_log         123.62 MiB      0.12 MiB
  trim_log_mirror  123.75 MiB      0.12 MiB
Base device layout (region / offset / blocks):
  sb_mirror          0.00 MiB      0.12 MiB
  vmap          102400.25 MiB      3.38 MiB
  data_btm           0.25 MiB 102400.00 MiB
upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
  Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
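The 90.00 MiB l2p region is exactly what the reported L2P geometry implies, which makes a convenient sanity check on the layout. A quick bit of arithmetic using only the numbers from the dump above:

  # 23592960 L2P entries x 4 bytes per address = 90.00 MiB,
  # matching the l2p region size in the NV cache layout.
  awk 'BEGIN { printf "%.2f MiB\n", 23592960 * 4 / (1024 * 1024) }'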
[2024-11-15 11:31:24] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] FTL startup actions, continued (all status 0):
  Layout upgrade               1.592 ms
  Initialize metadata         34.961 ms
  Initialize band addresses    0.067 ms
  Initialize NV cache         46.141 ms
  Initialize valid map         0.004 ms
  Initialize trim map          0.535 ms
  Initialize bands metadata    0.131 ms
  Initialize reloc            17.531 ms
[2024-11-15 11:31:24.995279] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3; state loaded successfully
  Restore NV cache metadata   14.457 ms
  Restore valid map metadata  24.083 ms
  Restore band info metadata  12.665 ms
  Restore trim metadata       12.766 ms
  Initialize P2L checkpointing 0.724 ms
  Restore P2L checkpoints     65.982 ms
[2024-11-15 11:31:25.124175] ftl_l2p_cache.c:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
  Initialize L2P              28.000 ms
  Restore L2P                  0.008 ms
  Finalize band initialization 0.043 ms
  Start core poller            0.018 ms
  Self test on startup         0.016 ms  (Self test skipped)
  Set FTL dirty state         26.684 ms
  Finalize initialization      0.044 ms
[2024-11-15 11:31:25.171310] mngt/ftl_mngt_ioch.c:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-15 11:31:25.174830] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.668 ms, result 0
[2024-11-15 11:31:25.175754] mngt/ftl_mngt_ioch.c:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-15 11:31:25.189831] mngt/ftl_mngt_ioch.c:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
Copying progress (spdk_dd): 24, 44, 65, 85, 106, 126, 146, 166, 185, 206, 226, 246, 256 of 256 [MB]; per-interval rates 19-24 MBps; final: 256/256 [MB] (average 20 MBps)
[2024-11-15 11:31:37.653263] mngt/ftl_mngt_ioch.c:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
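The reported 20 MBps average agrees with the wall clock: the copy runs from the IO channel creation at 11:31:25.19 to its teardown at 11:31:37.65, roughly 12.5 seconds for 256 MB. A quick check using just those two timestamps from the log:

  # 256 MB copied between 11:31:25.19 and 11:31:37.65 (same minute, so
  # subtracting the seconds fields is enough)
  awk 'BEGIN { printf "%.1f MBps\n", 256 / (37.65 - 25.19) }'   # ~20.5 MBps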
[2024-11-15 11:31:37] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] FTL shutdown actions after the copy (all status 0):
  Deinit core IO channel       0.009 ms  (FTL IO channel destroy on ftl_core_thread)
  Unregister IO device         3.547 ms
  Stop core poller             0.270 ms
  Persist L2P                  3.590 ms
  Finish L2P trims             6.772 ms
  Persist NV cache metadata   28.363 ms
  Persist valid map metadata  17.577 ms
  Persist P2L metadata         0.085 ms
  Persist band info metadata  29.017 ms
  Persist trim metadata       27.890 ms
  Persist superblock          27.830 ms
  Set FTL clean state         27.547 ms
[2024-11-15 11:31:37.839908] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1-47 (all identical): 0 / 261120 wr_cnt: 0 state: free
00:19:55.085 [2024-11-15 11:31:37.840516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.840994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:55.085 [2024-11-15 11:31:37.841264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:55.085 [2024-11-15 11:31:37.841276] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd 00:19:55.085 [2024-11-15 11:31:37.841288] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:55.085 [2024-11-15 11:31:37.841299] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:55.085 [2024-11-15 11:31:37.841309] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:55.085 [2024-11-15 11:31:37.841321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:55.085 [2024-11-15 11:31:37.841332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:55.085 [2024-11-15 11:31:37.841344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:55.085 [2024-11-15 11:31:37.841355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:55.085 [2024-11-15 11:31:37.841365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:55.085 [2024-11-15 11:31:37.841374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:55.085 [2024-11-15 11:31:37.841385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.085 [2024-11-15 11:31:37.841402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:55.085 [2024-11-15 11:31:37.841414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:19:55.085 [2024-11-15 11:31:37.841426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.085 [2024-11-15 11:31:37.857909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.086 [2024-11-15 11:31:37.858161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:55.086 [2024-11-15 11:31:37.858191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.458 ms 00:19:55.086 [2024-11-15 11:31:37.858205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.086 [2024-11-15 11:31:37.858817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.086 [2024-11-15 11:31:37.858840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:55.086 [2024-11-15 11:31:37.858869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:19:55.086 [2024-11-15 11:31:37.858879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.086 [2024-11-15 11:31:37.903721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.086 [2024-11-15 11:31:37.903787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.086 [2024-11-15 11:31:37.903805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.086 [2024-11-15 11:31:37.903817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.086 
[2024-11-15 11:31:37.903977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.086 [2024-11-15 11:31:37.903996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.086 [2024-11-15 11:31:37.904010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.086 [2024-11-15 11:31:37.904022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.086 [2024-11-15 11:31:37.904119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.086 [2024-11-15 11:31:37.904138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.086 [2024-11-15 11:31:37.904152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.086 [2024-11-15 11:31:37.904165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.086 [2024-11-15 11:31:37.904191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.086 [2024-11-15 11:31:37.904227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.086 [2024-11-15 11:31:37.904240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.086 [2024-11-15 11:31:37.904253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.086 [2024-11-15 11:31:38.009107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.086 [2024-11-15 11:31:38.009173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.086 [2024-11-15 11:31:38.009209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.086 [2024-11-15 11:31:38.009222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.091706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.091771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.346 [2024-11-15 11:31:38.091821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.091833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.091921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.091939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.346 [2024-11-15 11:31:38.091952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.091965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.092003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.092017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.346 [2024-11-15 11:31:38.092104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.092117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.092261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.092281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.346 [2024-11-15 11:31:38.092296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.092308] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.092361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.092379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:55.346 [2024-11-15 11:31:38.092393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.092421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.092472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.092489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.346 [2024-11-15 11:31:38.092502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.092514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.092576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.346 [2024-11-15 11:31:38.092595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.346 [2024-11-15 11:31:38.092619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.346 [2024-11-15 11:31:38.092631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.346 [2024-11-15 11:31:38.092824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 427.159 ms, result 0 00:19:56.282 00:19:56.282 00:19:56.282 11:31:38 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:56.282 11:31:38 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:56.541 11:31:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:56.799 [2024-11-15 11:31:39.577160] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:19:56.799 [2024-11-15 11:31:39.577324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76041 ] 00:19:57.058 [2024-11-15 11:31:39.753525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.058 [2024-11-15 11:31:39.857138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.316 [2024-11-15 11:31:40.176556] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.316 [2024-11-15 11:31:40.176653] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.577 [2024-11-15 11:31:40.338623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.338686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:57.577 [2024-11-15 11:31:40.338723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:57.577 [2024-11-15 11:31:40.338734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.342218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.342260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.577 [2024-11-15 11:31:40.342291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:19:57.577 [2024-11-15 11:31:40.342302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.342470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:57.577 [2024-11-15 11:31:40.343478] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:57.577 [2024-11-15 11:31:40.343524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.343540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.577 [2024-11-15 11:31:40.343553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:19:57.577 [2024-11-15 11:31:40.343564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.345836] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:57.577 [2024-11-15 11:31:40.361065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.361140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:57.577 [2024-11-15 11:31:40.361171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.230 ms 00:19:57.577 [2024-11-15 11:31:40.361183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.361298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.361319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:57.577 [2024-11-15 11:31:40.361331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:57.577 [2024-11-15 11:31:40.361342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.370416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:57.577 [2024-11-15 11:31:40.370482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.577 [2024-11-15 11:31:40.370512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.993 ms 00:19:57.577 [2024-11-15 11:31:40.370523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.370638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.370658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.577 [2024-11-15 11:31:40.370670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:57.577 [2024-11-15 11:31:40.370681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.370717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.370736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:57.577 [2024-11-15 11:31:40.370747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:57.577 [2024-11-15 11:31:40.370758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.370786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:57.577 [2024-11-15 11:31:40.375391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.375638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.577 [2024-11-15 11:31:40.375663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.612 ms 00:19:57.577 [2024-11-15 11:31:40.375675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.375763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.577 [2024-11-15 11:31:40.375787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:57.577 [2024-11-15 11:31:40.375813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:57.577 [2024-11-15 11:31:40.375832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.577 [2024-11-15 11:31:40.375870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:57.577 [2024-11-15 11:31:40.375904] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:57.577 [2024-11-15 11:31:40.375943] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:57.578 [2024-11-15 11:31:40.375962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:57.578 [2024-11-15 11:31:40.376105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:57.578 [2024-11-15 11:31:40.376126] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:57.578 [2024-11-15 11:31:40.376141] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:57.578 [2024-11-15 11:31:40.376176] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376208] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:57.578 [2024-11-15 11:31:40.376230] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:57.578 [2024-11-15 11:31:40.376240] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:57.578 [2024-11-15 11:31:40.376250] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:57.578 [2024-11-15 11:31:40.376262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.578 [2024-11-15 11:31:40.376274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:57.578 [2024-11-15 11:31:40.376284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:19:57.578 [2024-11-15 11:31:40.376305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.578 [2024-11-15 11:31:40.376413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.578 [2024-11-15 11:31:40.376450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:57.578 [2024-11-15 11:31:40.376476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:57.578 [2024-11-15 11:31:40.376487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.578 [2024-11-15 11:31:40.376601] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:57.578 [2024-11-15 11:31:40.376626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:57.578 [2024-11-15 11:31:40.376639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:57.578 [2024-11-15 11:31:40.376672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:57.578 [2024-11-15 11:31:40.376702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.578 [2024-11-15 11:31:40.376722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:57.578 [2024-11-15 11:31:40.376732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:57.578 [2024-11-15 11:31:40.376742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.578 [2024-11-15 11:31:40.376765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:57.578 [2024-11-15 11:31:40.376776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:57.578 [2024-11-15 11:31:40.376792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:57.578 [2024-11-15 11:31:40.376828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376839] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:57.578 [2024-11-15 11:31:40.376873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:57.578 [2024-11-15 11:31:40.376911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:57.578 [2024-11-15 11:31:40.376938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:57.578 [2024-11-15 11:31:40.376966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:57.578 [2024-11-15 11:31:40.376975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.578 [2024-11-15 11:31:40.376984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:57.578 [2024-11-15 11:31:40.376995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:57.578 [2024-11-15 11:31:40.377005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.578 [2024-11-15 11:31:40.377015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:57.578 [2024-11-15 11:31:40.377024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:57.578 [2024-11-15 11:31:40.377033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.578 [2024-11-15 11:31:40.377044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:57.578 [2024-11-15 11:31:40.377054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:57.578 [2024-11-15 11:31:40.377064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.377113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:57.578 [2024-11-15 11:31:40.377127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:57.578 [2024-11-15 11:31:40.377138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.377148] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:57.578 [2024-11-15 11:31:40.377160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:57.578 [2024-11-15 11:31:40.377171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.578 [2024-11-15 11:31:40.377187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.578 [2024-11-15 11:31:40.377200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:57.578 [2024-11-15 11:31:40.377211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.578 [2024-11-15 11:31:40.377222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.578 
[2024-11-15 11:31:40.377233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.578 [2024-11-15 11:31:40.377243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.578 [2024-11-15 11:31:40.377253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.578 [2024-11-15 11:31:40.377266] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.578 [2024-11-15 11:31:40.377280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:57.578 [2024-11-15 11:31:40.377304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:57.578 [2024-11-15 11:31:40.377315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:57.578 [2024-11-15 11:31:40.377325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:57.578 [2024-11-15 11:31:40.377336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:57.578 [2024-11-15 11:31:40.377347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:57.578 [2024-11-15 11:31:40.377358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:57.578 [2024-11-15 11:31:40.377368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:57.578 [2024-11-15 11:31:40.377379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:57.578 [2024-11-15 11:31:40.377404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:57.578 [2024-11-15 11:31:40.377488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.578 [2024-11-15 11:31:40.377500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.578 [2024-11-15 11:31:40.377521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.578 [2024-11-15 11:31:40.377531] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.578 [2024-11-15 11:31:40.377550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.578 [2024-11-15 11:31:40.377562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.578 [2024-11-15 11:31:40.377572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.578 [2024-11-15 11:31:40.377587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:19:57.578 [2024-11-15 11:31:40.377597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.578 [2024-11-15 11:31:40.415016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.578 [2024-11-15 11:31:40.415250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.579 [2024-11-15 11:31:40.415403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.342 ms 00:19:57.579 [2024-11-15 11:31:40.415452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.415725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.415872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.579 [2024-11-15 11:31:40.415984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:57.579 [2024-11-15 11:31:40.416064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.466126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.466333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.579 [2024-11-15 11:31:40.466445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.861 ms 00:19:57.579 [2024-11-15 11:31:40.466500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.466669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.466731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.579 [2024-11-15 11:31:40.466770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:57.579 [2024-11-15 11:31:40.466872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.467508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.467639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.579 [2024-11-15 11:31:40.467742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:57.579 [2024-11-15 11:31:40.467851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.468091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.468148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.579 [2024-11-15 11:31:40.468242] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:19:57.579 [2024-11-15 11:31:40.468368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.486669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.486826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.579 [2024-11-15 11:31:40.486925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.208 ms 00:19:57.579 [2024-11-15 11:31:40.486970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.579 [2024-11-15 11:31:40.501000] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:57.579 [2024-11-15 11:31:40.501259] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.579 [2024-11-15 11:31:40.501388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.579 [2024-11-15 11:31:40.501443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.579 [2024-11-15 11:31:40.501479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.210 ms 00:19:57.579 [2024-11-15 11:31:40.501612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.837 [2024-11-15 11:31:40.525305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.837 [2024-11-15 11:31:40.525356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.837 [2024-11-15 11:31:40.525387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.541 ms 00:19:57.837 [2024-11-15 11:31:40.525398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.837 [2024-11-15 11:31:40.537849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.837 [2024-11-15 11:31:40.537889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.837 [2024-11-15 11:31:40.537903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.352 ms 00:19:57.837 [2024-11-15 11:31:40.537913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.837 [2024-11-15 11:31:40.550199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.837 [2024-11-15 11:31:40.550237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.837 [2024-11-15 11:31:40.550252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.207 ms 00:19:57.837 [2024-11-15 11:31:40.550262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.837 [2024-11-15 11:31:40.550934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.550963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.838 [2024-11-15 11:31:40.550976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:57.838 [2024-11-15 11:31:40.550988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.619595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.619649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.838 [2024-11-15 11:31:40.619684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.575 ms 00:19:57.838 [2024-11-15 11:31:40.619695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.631001] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.838 [2024-11-15 11:31:40.650194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.650251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.838 [2024-11-15 11:31:40.650269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.344 ms 00:19:57.838 [2024-11-15 11:31:40.650287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.650410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.650428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.838 [2024-11-15 11:31:40.650457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:57.838 [2024-11-15 11:31:40.650468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.650546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.650562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.838 [2024-11-15 11:31:40.650574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:57.838 [2024-11-15 11:31:40.650585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.650634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.650651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.838 [2024-11-15 11:31:40.650663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:57.838 [2024-11-15 11:31:40.650673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.650713] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.838 [2024-11-15 11:31:40.650728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.650738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.838 [2024-11-15 11:31:40.650749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:57.838 [2024-11-15 11:31:40.650759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.677696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.677884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.838 [2024-11-15 11:31:40.677911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.908 ms 00:19:57.838 [2024-11-15 11:31:40.677925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.838 [2024-11-15 11:31:40.678116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.838 [2024-11-15 11:31:40.678138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.838 [2024-11-15 11:31:40.678151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:57.838 [2024-11-15 11:31:40.678163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:57.838 [2024-11-15 11:31:40.679646] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.838 [2024-11-15 11:31:40.683251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.638 ms, result 0 00:19:57.838 [2024-11-15 11:31:40.684265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.838 [2024-11-15 11:31:40.698943] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.098  [2024-11-15T11:31:41.047Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-11-15 11:31:40.887636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.098 [2024-11-15 11:31:40.897322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.098 [2024-11-15 11:31:40.897364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:58.098 [2024-11-15 11:31:40.897395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:58.098 [2024-11-15 11:31:40.897427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.098 [2024-11-15 11:31:40.897469] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:58.098 [2024-11-15 11:31:40.900675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.098 [2024-11-15 11:31:40.900715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:58.098 [2024-11-15 11:31:40.900727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:19:58.098 [2024-11-15 11:31:40.900737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.098 [2024-11-15 11:31:40.902606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.098 [2024-11-15 11:31:40.902655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:58.098 [2024-11-15 11:31:40.902669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.844 ms 00:19:58.098 [2024-11-15 11:31:40.902679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:40.906067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:40.906122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:58.099 [2024-11-15 11:31:40.906135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.368 ms 00:19:58.099 [2024-11-15 11:31:40.906145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:40.912099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:40.912128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:58.099 [2024-11-15 11:31:40.912140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.916 ms 00:19:58.099 [2024-11-15 11:31:40.912150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:40.937533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:40.937585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:58.099 [2024-11-15 11:31:40.937599] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.322 ms 00:19:58.099 [2024-11-15 11:31:40.937609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:40.953626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:40.953684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:58.099 [2024-11-15 11:31:40.953703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.958 ms 00:19:58.099 [2024-11-15 11:31:40.953714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:40.953873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:40.953890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:58.099 [2024-11-15 11:31:40.953901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:58.099 [2024-11-15 11:31:40.953910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:40.980781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:40.980816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:58.099 [2024-11-15 11:31:40.980829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.822 ms 00:19:58.099 [2024-11-15 11:31:40.980838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:41.007007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:41.007048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:58.099 [2024-11-15 11:31:41.007065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.111 ms 00:19:58.099 [2024-11-15 11:31:41.007075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.099 [2024-11-15 11:31:41.032410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.099 [2024-11-15 11:31:41.032462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:58.099 [2024-11-15 11:31:41.032476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.281 ms 00:19:58.099 [2024-11-15 11:31:41.032485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.360 [2024-11-15 11:31:41.057852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.360 [2024-11-15 11:31:41.057887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:58.360 [2024-11-15 11:31:41.057900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.299 ms 00:19:58.360 [2024-11-15 11:31:41.057917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.360 [2024-11-15 11:31:41.057972] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:58.360 [2024-11-15 11:31:41.057995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:58.360 [2024-11-15 11:31:41.058051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:58.360 [2024-11-15 11:31:41.058397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058928] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.058996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:58.361 [2024-11-15 11:31:41.059193] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:58.361 [2024-11-15 11:31:41.059204] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd 00:19:58.361 [2024-11-15 11:31:41.059214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:58.361 [2024-11-15 11:31:41.059224] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:58.361 [2024-11-15 11:31:41.059233] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:58.361 [2024-11-15 11:31:41.059243] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:58.361 [2024-11-15 11:31:41.059253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:58.361 [2024-11-15 11:31:41.059263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:58.361 [2024-11-15 11:31:41.059272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:58.361 [2024-11-15 11:31:41.059281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:58.361 [2024-11-15 11:31:41.059289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:58.361 [2024-11-15 11:31:41.059299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.361 [2024-11-15 11:31:41.059315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:58.361 [2024-11-15 11:31:41.059326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:19:58.361 [2024-11-15 11:31:41.059335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.361 [2024-11-15 11:31:41.074361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.361 [2024-11-15 11:31:41.074407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:58.361 [2024-11-15 11:31:41.074421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.001 ms 00:19:58.361 [2024-11-15 11:31:41.074446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.361 [2024-11-15 11:31:41.074934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.361 [2024-11-15 11:31:41.074955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:58.361 [2024-11-15 11:31:41.074967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:19:58.361 [2024-11-15 11:31:41.074976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.361 [2024-11-15 11:31:41.114136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.361 [2024-11-15 11:31:41.114172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.361 [2024-11-15 11:31:41.114185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.361 [2024-11-15 11:31:41.114195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.361 [2024-11-15 11:31:41.114278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.361 [2024-11-15 11:31:41.114294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.361 [2024-11-15 11:31:41.114304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.362 [2024-11-15 11:31:41.114314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.362 [2024-11-15 11:31:41.114367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.362 [2024-11-15 11:31:41.114384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.362 [2024-11-15 11:31:41.114395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.362 [2024-11-15 11:31:41.114404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.362 [2024-11-15 11:31:41.114425] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.362 [2024-11-15 11:31:41.114462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.362 [2024-11-15 11:31:41.114477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.362 [2024-11-15 11:31:41.114486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.362 [2024-11-15 11:31:41.216232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.362 [2024-11-15 11:31:41.216314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.362 [2024-11-15 11:31:41.216348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.362 [2024-11-15 11:31:41.216361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.308516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.308594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.621 [2024-11-15 11:31:41.308613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.308626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.308718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.308736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.621 [2024-11-15 11:31:41.308751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.308763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.308803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.308817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.621 [2024-11-15 11:31:41.308838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.308850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.308985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.309004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.621 [2024-11-15 11:31:41.309018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.309049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.309122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.309141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.621 [2024-11-15 11:31:41.309161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.309173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.309226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.309248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.621 [2024-11-15 11:31:41.309262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.309274] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.309333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.621 [2024-11-15 11:31:41.309351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.621 [2024-11-15 11:31:41.309370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.621 [2024-11-15 11:31:41.309382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.621 [2024-11-15 11:31:41.309562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.219 ms, result 0 00:19:59.558 00:19:59.558 00:19:59.558 11:31:42 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76074 00:19:59.558 11:31:42 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:59.558 11:31:42 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76074 00:19:59.558 11:31:42 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76074 ']' 00:19:59.558 11:31:42 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.558 11:31:42 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:59.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.558 11:31:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.558 11:31:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:59.558 11:31:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:59.558 [2024-11-15 11:31:42.311640] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:19:59.558 [2024-11-15 11:31:42.311813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76074 ] 00:19:59.558 [2024-11-15 11:31:42.484561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.817 [2024-11-15 11:31:42.582670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.384 11:31:43 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.384 11:31:43 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:00.384 11:31:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:00.642 [2024-11-15 11:31:43.565935] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.642 [2024-11-15 11:31:43.566054] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.902 [2024-11-15 11:31:43.728696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.902 [2024-11-15 11:31:43.728761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.902 [2024-11-15 11:31:43.728802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.902 [2024-11-15 11:31:43.728816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.902 [2024-11-15 11:31:43.732583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.902 [2024-11-15 11:31:43.732639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.902 [2024-11-15 11:31:43.732673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:20:00.902 [2024-11-15 11:31:43.732685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.902 [2024-11-15 11:31:43.732837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.902 [2024-11-15 11:31:43.733830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.902 [2024-11-15 11:31:43.733887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.902 [2024-11-15 11:31:43.733901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.902 [2024-11-15 11:31:43.733915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:20:00.902 [2024-11-15 11:31:43.733926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.902 [2024-11-15 11:31:43.735984] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.902 [2024-11-15 11:31:43.750437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.902 [2024-11-15 11:31:43.750533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.902 [2024-11-15 11:31:43.750552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.459 ms 00:20:00.902 [2024-11-15 11:31:43.750571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.902 [2024-11-15 11:31:43.750686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.902 [2024-11-15 11:31:43.750713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.902 [2024-11-15 11:31:43.750727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:00.902 [2024-11-15 11:31:43.750775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.902 [2024-11-15 11:31:43.759217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.902 [2024-11-15 11:31:43.759302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.902 [2024-11-15 11:31:43.759318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.368 ms 00:20:00.902 [2024-11-15 11:31:43.759335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.759494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.903 [2024-11-15 11:31:43.759555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.903 [2024-11-15 11:31:43.759570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:00.903 [2024-11-15 11:31:43.759608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.759654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.903 [2024-11-15 11:31:43.759677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.903 [2024-11-15 11:31:43.759691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.903 [2024-11-15 11:31:43.759707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.759742] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.903 [2024-11-15 11:31:43.764224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.903 [2024-11-15 11:31:43.764274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.903 [2024-11-15 11:31:43.764295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.484 ms 00:20:00.903 [2024-11-15 11:31:43.764307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.764398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.903 [2024-11-15 11:31:43.764417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.903 [2024-11-15 11:31:43.764435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:00.903 [2024-11-15 11:31:43.764451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.764504] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.903 [2024-11-15 11:31:43.764551] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.903 [2024-11-15 11:31:43.764614] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.903 [2024-11-15 11:31:43.764639] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.903 [2024-11-15 11:31:43.764750] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.903 [2024-11-15 11:31:43.764777] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.903 [2024-11-15 11:31:43.764809] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.903 [2024-11-15 11:31:43.764825] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.903 [2024-11-15 11:31:43.764844] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.903 [2024-11-15 11:31:43.764857] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.903 [2024-11-15 11:31:43.764873] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.903 [2024-11-15 11:31:43.764885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.903 [2024-11-15 11:31:43.764906] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.903 [2024-11-15 11:31:43.764919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.903 [2024-11-15 11:31:43.764935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.903 [2024-11-15 11:31:43.764949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:20:00.903 [2024-11-15 11:31:43.764979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.765118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.903 [2024-11-15 11:31:43.765149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.903 [2024-11-15 11:31:43.765164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:00.903 [2024-11-15 11:31:43.765181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.903 [2024-11-15 11:31:43.765303] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.903 [2024-11-15 11:31:43.765332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.903 [2024-11-15 11:31:43.765346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.903 [2024-11-15 11:31:43.765392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.903 [2024-11-15 11:31:43.765431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.903 [2024-11-15 11:31:43.765454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.903 [2024-11-15 11:31:43.765467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.903 [2024-11-15 11:31:43.765477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.903 [2024-11-15 11:31:43.765491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.903 [2024-11-15 11:31:43.765503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.903 [2024-11-15 11:31:43.765516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.903 
[2024-11-15 11:31:43.765526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.903 [2024-11-15 11:31:43.765538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.903 [2024-11-15 11:31:43.765582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.903 [2024-11-15 11:31:43.765621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.903 [2024-11-15 11:31:43.765654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.903 [2024-11-15 11:31:43.765688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.903 [2024-11-15 11:31:43.765725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.903 [2024-11-15 11:31:43.765747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.903 [2024-11-15 11:31:43.765759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.903 [2024-11-15 11:31:43.765769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.903 [2024-11-15 11:31:43.765781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.903 [2024-11-15 11:31:43.765792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.903 [2024-11-15 11:31:43.765806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.903 [2024-11-15 11:31:43.765829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.903 [2024-11-15 11:31:43.765839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.903 [2024-11-15 11:31:43.765866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.903 [2024-11-15 11:31:43.765881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.903 [2024-11-15 11:31:43.765906] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:00.903 [2024-11-15 11:31:43.765917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.903 [2024-11-15 11:31:43.765929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.903 [2024-11-15 11:31:43.765940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.903 [2024-11-15 11:31:43.765952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.903 [2024-11-15 11:31:43.765962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.903 [2024-11-15 11:31:43.765977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.903 [2024-11-15 11:31:43.765991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.903 [2024-11-15 11:31:43.766009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.903 [2024-11-15 11:31:43.766020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.903 [2024-11-15 11:31:43.766073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:00.903 [2024-11-15 11:31:43.766088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.903 [2024-11-15 11:31:43.766102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.903 [2024-11-15 11:31:43.766113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.903 [2024-11-15 11:31:43.766127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.904 [2024-11-15 11:31:43.766138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.904 [2024-11-15 11:31:43.766151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.904 [2024-11-15 11:31:43.766162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.904 [2024-11-15 11:31:43.766175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.904 [2024-11-15 11:31:43.766187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.904 [2024-11-15 11:31:43.766200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.904 [2024-11-15 11:31:43.766211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.904 [2024-11-15 11:31:43.766225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.904 [2024-11-15 
11:31:43.766238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.904 [2024-11-15 11:31:43.766255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.904 [2024-11-15 11:31:43.766267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.904 [2024-11-15 11:31:43.766281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.904 [2024-11-15 11:31:43.766292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.904 [2024-11-15 11:31:43.766308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.766325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.904 [2024-11-15 11:31:43.766340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:20:00.904 [2024-11-15 11:31:43.766352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.904 [2024-11-15 11:31:43.802466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.802541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.904 [2024-11-15 11:31:43.802578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.014 ms 00:20:00.904 [2024-11-15 11:31:43.802593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.904 [2024-11-15 11:31:43.802772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.802791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.904 [2024-11-15 11:31:43.802805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:00.904 [2024-11-15 11:31:43.802832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.904 [2024-11-15 11:31:43.842337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.842412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.904 [2024-11-15 11:31:43.842452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.450 ms 00:20:00.904 [2024-11-15 11:31:43.842466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.904 [2024-11-15 11:31:43.842603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.842622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.904 [2024-11-15 11:31:43.842642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:00.904 [2024-11-15 11:31:43.842654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.904 [2024-11-15 11:31:43.843305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.843347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.904 [2024-11-15 11:31:43.843375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:20:00.904 [2024-11-15 11:31:43.843388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:00.904 [2024-11-15 11:31:43.843574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.904 [2024-11-15 11:31:43.843608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.904 [2024-11-15 11:31:43.843627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:20:00.904 [2024-11-15 11:31:43.843639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.162 [2024-11-15 11:31:43.863690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.162 [2024-11-15 11:31:43.863753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.162 [2024-11-15 11:31:43.863791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.013 ms 00:20:01.162 [2024-11-15 11:31:43.863804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.162 [2024-11-15 11:31:43.887357] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:01.163 [2024-11-15 11:31:43.887420] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:01.163 [2024-11-15 11:31:43.887460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:43.887474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:01.163 [2024-11-15 11:31:43.887493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.507 ms 00:20:01.163 [2024-11-15 11:31:43.887506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:43.912428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:43.912488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:01.163 [2024-11-15 11:31:43.912528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.821 ms 00:20:01.163 [2024-11-15 11:31:43.912541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:43.925994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:43.926075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:01.163 [2024-11-15 11:31:43.926105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.341 ms 00:20:01.163 [2024-11-15 11:31:43.926118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:43.940701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:43.940758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:01.163 [2024-11-15 11:31:43.940796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:20:01.163 [2024-11-15 11:31:43.940809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:43.941850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:43.941901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.163 [2024-11-15 11:31:43.941938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:20:01.163 [2024-11-15 11:31:43.941951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 
11:31:44.015078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.015161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:01.163 [2024-11-15 11:31:44.015205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.086 ms 00:20:01.163 [2024-11-15 11:31:44.015220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.026613] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.163 [2024-11-15 11:31:44.045474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.045584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.163 [2024-11-15 11:31:44.045612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.112 ms 00:20:01.163 [2024-11-15 11:31:44.045630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.045763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.045805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:01.163 [2024-11-15 11:31:44.045836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:01.163 [2024-11-15 11:31:44.045854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.045930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.045955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.163 [2024-11-15 11:31:44.045970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:01.163 [2024-11-15 11:31:44.045995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.046049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.046075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.163 [2024-11-15 11:31:44.046090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:01.163 [2024-11-15 11:31:44.046111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.046164] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:01.163 [2024-11-15 11:31:44.046198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.046211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:01.163 [2024-11-15 11:31:44.046237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:01.163 [2024-11-15 11:31:44.046250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.073941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.073986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.163 [2024-11-15 11:31:44.074027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.640 ms 00:20:01.163 [2024-11-15 11:31:44.074052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.074194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.163 [2024-11-15 11:31:44.074214] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.163 [2024-11-15 11:31:44.074266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:01.163 [2024-11-15 11:31:44.074285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.163 [2024-11-15 11:31:44.075748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.163 [2024-11-15 11:31:44.079772] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.590 ms, result 0 00:20:01.163 [2024-11-15 11:31:44.081564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.163 Some configs were skipped because the RPC state that can call them passed over. 00:20:01.422 11:31:44 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:01.422 [2024-11-15 11:31:44.333973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.422 [2024-11-15 11:31:44.334101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:01.422 [2024-11-15 11:31:44.334125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.630 ms 00:20:01.422 [2024-11-15 11:31:44.334141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.422 [2024-11-15 11:31:44.334220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.851 ms, result 0 00:20:01.422 true 00:20:01.422 11:31:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:01.680 [2024-11-15 11:31:44.566295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.680 [2024-11-15 11:31:44.566371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:01.680 [2024-11-15 11:31:44.566439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:20:01.680 [2024-11-15 11:31:44.566452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.680 [2024-11-15 11:31:44.566534] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.849 ms, result 0 00:20:01.680 true 00:20:01.681 11:31:44 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76074 00:20:01.681 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76074 ']' 00:20:01.681 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76074 00:20:01.681 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:20:01.681 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:01.681 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76074 00:20:01.939 killing process with pid 76074 00:20:01.939 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:01.939 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:01.939 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76074' 00:20:01.939 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76074 00:20:01.939 11:31:44 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76074 00:20:02.877 [2024-11-15 11:31:45.508519] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.508632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.877 [2024-11-15 11:31:45.508653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:02.877 [2024-11-15 11:31:45.508667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.508701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:02.877 [2024-11-15 11:31:45.512089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.512136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.877 [2024-11-15 11:31:45.512171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.362 ms 00:20:02.877 [2024-11-15 11:31:45.512182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.512524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.512553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.877 [2024-11-15 11:31:45.512569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:02.877 [2024-11-15 11:31:45.512581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.516300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.516356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.877 [2024-11-15 11:31:45.516379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:20:02.877 [2024-11-15 11:31:45.516392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.522872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.522924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.877 [2024-11-15 11:31:45.522956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:20:02.877 [2024-11-15 11:31:45.522967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.533795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.533852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.877 [2024-11-15 11:31:45.533887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.767 ms 00:20:02.877 [2024-11-15 11:31:45.533909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.542481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.542542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.877 [2024-11-15 11:31:45.542575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.523 ms 00:20:02.877 [2024-11-15 11:31:45.542587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.542734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.542753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.877 [2024-11-15 11:31:45.542783] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:02.877 [2024-11-15 11:31:45.542810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.554434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.554489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.877 [2024-11-15 11:31:45.554521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.595 ms 00:20:02.877 [2024-11-15 11:31:45.554531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.565847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.565901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.877 [2024-11-15 11:31:45.565937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.269 ms 00:20:02.877 [2024-11-15 11:31:45.565947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.576784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.576836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.877 [2024-11-15 11:31:45.576871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.790 ms 00:20:02.877 [2024-11-15 11:31:45.576883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.587854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.877 [2024-11-15 11:31:45.587908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.877 [2024-11-15 11:31:45.587941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.896 ms 00:20:02.877 [2024-11-15 11:31:45.587951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.877 [2024-11-15 11:31:45.587996] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.877 [2024-11-15 11:31:45.588019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 
11:31:45.588183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:02.877 [2024-11-15 11:31:45.588484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:02.878 [2024-11-15 11:31:45.588577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.588988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.878 [2024-11-15 11:31:45.589668] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.878 [2024-11-15 11:31:45.589697] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd 00:20:02.878 [2024-11-15 11:31:45.589724] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.878 [2024-11-15 11:31:45.589750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.878 [2024-11-15 11:31:45.589762] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.878 [2024-11-15 11:31:45.589779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.878 [2024-11-15 11:31:45.589790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.878 [2024-11-15 11:31:45.589807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:02.878 [2024-11-15 11:31:45.589820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.878 [2024-11-15 11:31:45.589835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.878 [2024-11-15 11:31:45.589846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.878 [2024-11-15 11:31:45.589864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:02.878 [2024-11-15 11:31:45.589877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.878 [2024-11-15 11:31:45.589895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:20:02.878 [2024-11-15 11:31:45.589907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.878 [2024-11-15 11:31:45.605230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.878 [2024-11-15 11:31:45.605287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.878 [2024-11-15 11:31:45.605315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.260 ms 00:20:02.878 [2024-11-15 11:31:45.605329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.878 [2024-11-15 11:31:45.605888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.878 [2024-11-15 11:31:45.605951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.878 [2024-11-15 11:31:45.605988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:20:02.878 [2024-11-15 11:31:45.606006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.878 [2024-11-15 11:31:45.659529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.879 [2024-11-15 11:31:45.659605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.879 [2024-11-15 11:31:45.659642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.879 [2024-11-15 11:31:45.659654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.879 [2024-11-15 11:31:45.659788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.879 [2024-11-15 11:31:45.659806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.879 [2024-11-15 11:31:45.659821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.879 [2024-11-15 11:31:45.659834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.879 [2024-11-15 11:31:45.659934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.879 [2024-11-15 11:31:45.659952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.879 [2024-11-15 11:31:45.659970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.879 [2024-11-15 11:31:45.659982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.879 [2024-11-15 11:31:45.660011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.879 [2024-11-15 11:31:45.660025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.879 [2024-11-15 11:31:45.660039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.879 [2024-11-15 11:31:45.660050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.879 [2024-11-15 11:31:45.760947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.879 [2024-11-15 11:31:45.761015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.879 [2024-11-15 11:31:45.761104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.879 [2024-11-15 11:31:45.761122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 
11:31:45.839438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.839522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.138 [2024-11-15 11:31:45.839563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.839583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.839701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.839736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.138 [2024-11-15 11:31:45.839760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.839773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.839849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.839866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.138 [2024-11-15 11:31:45.839884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.839897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.840079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.840108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.138 [2024-11-15 11:31:45.840129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.840143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.840211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.840230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.138 [2024-11-15 11:31:45.840249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.840276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.840354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.840373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.138 [2024-11-15 11:31:45.840397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.840410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.840505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.138 [2024-11-15 11:31:45.840524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.138 [2024-11-15 11:31:45.840543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.138 [2024-11-15 11:31:45.840563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.138 [2024-11-15 11:31:45.840744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 332.192 ms, result 0 00:20:04.075 11:31:46 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:04.075 [2024-11-15 11:31:46.836164] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:20:04.075 [2024-11-15 11:31:46.836345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76132 ] 00:20:04.075 [2024-11-15 11:31:47.018568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.335 [2024-11-15 11:31:47.126112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.594 [2024-11-15 11:31:47.454026] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:04.594 [2024-11-15 11:31:47.454133] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:04.853 [2024-11-15 11:31:47.617454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.617517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:04.853 [2024-11-15 11:31:47.617537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:04.853 [2024-11-15 11:31:47.617549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.620799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.620835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.853 [2024-11-15 11:31:47.620850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms 00:20:04.853 [2024-11-15 11:31:47.620862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.621005] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:04.853 [2024-11-15 11:31:47.621947] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:04.853 [2024-11-15 11:31:47.621975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.621988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.853 [2024-11-15 11:31:47.622000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:20:04.853 [2024-11-15 11:31:47.622011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.623900] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:04.853 [2024-11-15 11:31:47.640561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.640604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:04.853 [2024-11-15 11:31:47.640621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.661 ms 00:20:04.853 [2024-11-15 11:31:47.640634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.640882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.640905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:04.853 [2024-11-15 11:31:47.640919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:04.853 [2024-11-15 
11:31:47.640931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.649723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.649783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.853 [2024-11-15 11:31:47.649814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.733 ms 00:20:04.853 [2024-11-15 11:31:47.649825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.649955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.649974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.853 [2024-11-15 11:31:47.649987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:04.853 [2024-11-15 11:31:47.649997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.650069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.650139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:04.853 [2024-11-15 11:31:47.650152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:04.853 [2024-11-15 11:31:47.650164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.650202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:04.853 [2024-11-15 11:31:47.654884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.654937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.853 [2024-11-15 11:31:47.654968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:20:04.853 [2024-11-15 11:31:47.654979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.655073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.853 [2024-11-15 11:31:47.655094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:04.853 [2024-11-15 11:31:47.655106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:04.853 [2024-11-15 11:31:47.655117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.853 [2024-11-15 11:31:47.655148] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:04.854 [2024-11-15 11:31:47.655181] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:04.854 [2024-11-15 11:31:47.655221] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:04.854 [2024-11-15 11:31:47.655241] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:04.854 [2024-11-15 11:31:47.655361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:04.854 [2024-11-15 11:31:47.655376] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:04.854 [2024-11-15 11:31:47.655390] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:04.854 [2024-11-15 11:31:47.655404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:04.854 [2024-11-15 11:31:47.655423] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:04.854 [2024-11-15 11:31:47.655435] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:04.854 [2024-11-15 11:31:47.655446] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:04.854 [2024-11-15 11:31:47.655457] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:04.854 [2024-11-15 11:31:47.655467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:04.854 [2024-11-15 11:31:47.655479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.854 [2024-11-15 11:31:47.655490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:04.854 [2024-11-15 11:31:47.655503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:20:04.854 [2024-11-15 11:31:47.655514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.854 [2024-11-15 11:31:47.655608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.854 [2024-11-15 11:31:47.655629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:04.854 [2024-11-15 11:31:47.655640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:04.854 [2024-11-15 11:31:47.655651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.854 [2024-11-15 11:31:47.655758] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:04.854 [2024-11-15 11:31:47.655775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:04.854 [2024-11-15 11:31:47.655787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.854 [2024-11-15 11:31:47.655798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.655810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:04.854 [2024-11-15 11:31:47.655820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.655830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:04.854 [2024-11-15 11:31:47.655840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:04.854 [2024-11-15 11:31:47.655856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:04.854 [2024-11-15 11:31:47.655866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.854 [2024-11-15 11:31:47.655876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:04.854 [2024-11-15 11:31:47.655886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:04.854 [2024-11-15 11:31:47.655896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.854 [2024-11-15 11:31:47.655936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:04.854 [2024-11-15 11:31:47.655947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:04.854 [2024-11-15 11:31:47.655957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.655967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:04.854 [2024-11-15 11:31:47.655992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:04.854 [2024-11-15 11:31:47.656037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:04.854 [2024-11-15 11:31:47.656068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:04.854 [2024-11-15 11:31:47.656119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:04.854 [2024-11-15 11:31:47.656151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:04.854 [2024-11-15 11:31:47.656182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.854 [2024-11-15 11:31:47.656203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:04.854 [2024-11-15 11:31:47.656213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:04.854 [2024-11-15 11:31:47.656223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.854 [2024-11-15 11:31:47.656233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:04.854 [2024-11-15 11:31:47.656244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:04.854 [2024-11-15 11:31:47.656254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:04.854 [2024-11-15 11:31:47.656275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:04.854 [2024-11-15 11:31:47.656301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656311] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:04.854 [2024-11-15 11:31:47.656325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:04.854 [2024-11-15 11:31:47.656336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.854 [2024-11-15 11:31:47.656365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:04.854 [2024-11-15 11:31:47.656376] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:04.854 [2024-11-15 11:31:47.656388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:04.854 [2024-11-15 11:31:47.656399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:04.854 [2024-11-15 11:31:47.656410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:04.854 [2024-11-15 11:31:47.656421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:04.854 [2024-11-15 11:31:47.656434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:04.854 [2024-11-15 11:31:47.656449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:04.854 [2024-11-15 11:31:47.656474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:04.854 [2024-11-15 11:31:47.656487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:04.854 [2024-11-15 11:31:47.656499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:04.854 [2024-11-15 11:31:47.656525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:04.854 [2024-11-15 11:31:47.656537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:04.854 [2024-11-15 11:31:47.656548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:04.854 [2024-11-15 11:31:47.656559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:04.854 [2024-11-15 11:31:47.656570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:04.854 [2024-11-15 11:31:47.656582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:04.854 [2024-11-15 11:31:47.656637] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:04.854 [2024-11-15 11:31:47.656650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:04.854 [2024-11-15 11:31:47.656682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:04.854 [2024-11-15 11:31:47.656694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:04.854 [2024-11-15 11:31:47.656705] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:04.855 [2024-11-15 11:31:47.656733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.656746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:04.855 [2024-11-15 11:31:47.656777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:20:04.855 [2024-11-15 11:31:47.656790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.695693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.695767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.855 [2024-11-15 11:31:47.695802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.799 ms 00:20:04.855 [2024-11-15 11:31:47.695814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.696041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.696084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:04.855 [2024-11-15 11:31:47.696114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:04.855 [2024-11-15 11:31:47.696127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.746088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.746159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.855 [2024-11-15 11:31:47.746193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.926 ms 00:20:04.855 [2024-11-15 11:31:47.746210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.746364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.746384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.855 [2024-11-15 11:31:47.746413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.855 [2024-11-15 11:31:47.746440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.747029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.747049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.855 [2024-11-15 11:31:47.747062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:20:04.855 [2024-11-15 11:31:47.747079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.747263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.747283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.855 [2024-11-15 11:31:47.747296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:20:04.855 [2024-11-15 11:31:47.747307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.766726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.766770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:04.855 [2024-11-15 11:31:47.766804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.388 ms 00:20:04.855 [2024-11-15 11:31:47.766816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.855 [2024-11-15 11:31:47.783287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:04.855 [2024-11-15 11:31:47.783348] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:04.855 [2024-11-15 11:31:47.783381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.855 [2024-11-15 11:31:47.783393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:04.855 [2024-11-15 11:31:47.783420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.406 ms 00:20:04.855 [2024-11-15 11:31:47.783432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.809068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.809163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:05.113 [2024-11-15 11:31:47.809181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.532 ms 00:20:05.113 [2024-11-15 11:31:47.809194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.823076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.823132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:05.113 [2024-11-15 11:31:47.823163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.781 ms 00:20:05.113 [2024-11-15 11:31:47.823174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.836604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.836661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:05.113 [2024-11-15 11:31:47.836691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.341 ms 00:20:05.113 [2024-11-15 11:31:47.836702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.837643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.837694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:05.113 [2024-11-15 11:31:47.837724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:20:05.113 [2024-11-15 11:31:47.837735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.907057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 
11:31:47.907165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:05.113 [2024-11-15 11:31:47.907200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.273 ms 00:20:05.113 [2024-11-15 11:31:47.907212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.917573] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:05.113 [2024-11-15 11:31:47.935642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.935698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:05.113 [2024-11-15 11:31:47.935732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.298 ms 00:20:05.113 [2024-11-15 11:31:47.935749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.935868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.935887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:05.113 [2024-11-15 11:31:47.935900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:05.113 [2024-11-15 11:31:47.935910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.935996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.936044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:05.113 [2024-11-15 11:31:47.936056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:05.113 [2024-11-15 11:31:47.936067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.936160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.936181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:05.113 [2024-11-15 11:31:47.936194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:05.113 [2024-11-15 11:31:47.936205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.936251] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:05.113 [2024-11-15 11:31:47.936267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.936279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:05.113 [2024-11-15 11:31:47.936291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:05.113 [2024-11-15 11:31:47.936302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.961904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.961947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:05.113 [2024-11-15 11:31:47.961977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:20:05.113 [2024-11-15 11:31:47.961989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.113 [2024-11-15 11:31:47.962128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.113 [2024-11-15 11:31:47.962149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:05.113 [2024-11-15 
11:31:47.962161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:05.114 [2024-11-15 11:31:47.962172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.114 [2024-11-15 11:31:47.963628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:05.114 [2024-11-15 11:31:47.967170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.776 ms, result 0 00:20:05.114 [2024-11-15 11:31:47.968114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.114 [2024-11-15 11:31:47.982099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.490  [2024-11-15T11:31:50.375Z] Copying: 23/256 [MB] (23 MBps) [2024-11-15T11:31:51.311Z] Copying: 44/256 [MB] (21 MBps) [2024-11-15T11:31:52.246Z] Copying: 66/256 [MB] (22 MBps) [2024-11-15T11:31:53.181Z] Copying: 88/256 [MB] (21 MBps) [2024-11-15T11:31:54.129Z] Copying: 109/256 [MB] (21 MBps) [2024-11-15T11:31:55.063Z] Copying: 131/256 [MB] (21 MBps) [2024-11-15T11:31:56.440Z] Copying: 153/256 [MB] (21 MBps) [2024-11-15T11:31:57.377Z] Copying: 174/256 [MB] (21 MBps) [2024-11-15T11:31:58.311Z] Copying: 196/256 [MB] (21 MBps) [2024-11-15T11:31:59.247Z] Copying: 217/256 [MB] (20 MBps) [2024-11-15T11:32:00.183Z] Copying: 238/256 [MB] (21 MBps) [2024-11-15T11:32:00.450Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-15 11:32:00.188335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.501 [2024-11-15 11:32:00.205710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.501 [2024-11-15 11:32:00.205782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.501 [2024-11-15 11:32:00.205807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:17.501 [2024-11-15 11:32:00.205835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.501 [2024-11-15 11:32:00.205879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.501 [2024-11-15 11:32:00.210475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.501 [2024-11-15 11:32:00.210526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.501 [2024-11-15 11:32:00.210546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.567 ms 00:20:17.501 [2024-11-15 11:32:00.210562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.210956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.210990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.502 [2024-11-15 11:32:00.211009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:20:17.502 [2024-11-15 11:32:00.211024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.215678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.215734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.502 [2024-11-15 11:32:00.215753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:20:17.502 [2024-11-15 
11:32:00.215777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.225065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.225122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.502 [2024-11-15 11:32:00.225140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.228 ms 00:20:17.502 [2024-11-15 11:32:00.225155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.263444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.263504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.502 [2024-11-15 11:32:00.263525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.219 ms 00:20:17.502 [2024-11-15 11:32:00.263540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.285320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.285390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.502 [2024-11-15 11:32:00.285418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.716 ms 00:20:17.502 [2024-11-15 11:32:00.285434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.285646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.285673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.502 [2024-11-15 11:32:00.285691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:17.502 [2024-11-15 11:32:00.285705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.324572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.324631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.502 [2024-11-15 11:32:00.324653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.816 ms 00:20:17.502 [2024-11-15 11:32:00.324668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.363274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.363331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.502 [2024-11-15 11:32:00.363351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.539 ms 00:20:17.502 [2024-11-15 11:32:00.363367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.401470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.401530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.502 [2024-11-15 11:32:00.401550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.039 ms 00:20:17.502 [2024-11-15 11:32:00.401564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.502 [2024-11-15 11:32:00.439934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.502 [2024-11-15 11:32:00.439999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.502 [2024-11-15 11:32:00.440056] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.229 ms
00:20:17.502 [2024-11-15 11:32:00.440077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.502 [2024-11-15 11:32:00.440148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:17.502 [2024-11-15 11:32:00.440175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... identical ftl_dev_dump_bands entries for Bands 2 through 100 elided; every band reads "0 / 261120 wr_cnt: 0 state: free" ...]
00:20:17.510 [2024-11-15 11:32:00.441748] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:17.510 [2024-11-15 11:32:00.441763] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8de081b6-03aa-47ad-91df-ca5aa125d9cd
00:20:17.510 [2024-11-15 11:32:00.441778] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:17.510 [2024-11-15 11:32:00.441792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:17.510 [2024-11-15 11:32:00.441806] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:17.510 [2024-11-15 11:32:00.441820] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:17.510 [2024-11-15 11:32:00.441834] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:17.510 [2024-11-15 11:32:00.441848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:17.510 [2024-11-15 11:32:00.441862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:17.510 [2024-11-15 11:32:00.441875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:17.510 [2024-11-15 11:32:00.441888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:20:17.510 [2024-11-15 11:32:00.441903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.510 [2024-11-15 11:32:00.441924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:17.510 [2024-11-15 11:32:00.441940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms
00:20:17.510 [2024-11-15 11:32:00.441954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.769 [2024-11-15 11:32:00.463199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.769 [2024-11-15 11:32:00.463253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:17.769 [2024-11-15 11:32:00.463274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.210 ms
00:20:17.769 [2024-11-15 11:32:00.463289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.769 [2024-11-15 11:32:00.463907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:17.769 [2024-11-15 11:32:00.463944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:17.769 [2024-11-15 11:32:00.463962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms
00:20:17.769 [2024-11-15 11:32:00.463977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.769 [2024-11-15 11:32:00.508376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:17.769 [2024-11-15 11:32:00.508442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:17.769 [2024-11-15 11:32:00.508473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:17.769 [2024-11-15 11:32:00.508485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:17.769 [2024-11-15 11:32:00.508600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:17.769 [2024-11-15 11:32:00.508617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.769 [2024-11-15 11:32:00.508629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.508670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.508742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.508759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.769 [2024-11-15 11:32:00.508772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.508783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.508808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.508827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.769 [2024-11-15 11:32:00.508839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.508850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.595032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.595135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.769 [2024-11-15 11:32:00.595169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.595181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.666470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.666556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.769 [2024-11-15 11:32:00.666597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.666609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.666708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.666725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.769 [2024-11-15 11:32:00.666737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.666747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.666782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.666794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.769 [2024-11-15 11:32:00.666813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.666824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.666976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.666995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.769 [2024-11-15 11:32:00.667007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.667018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 
11:32:00.667109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.667146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.769 [2024-11-15 11:32:00.667160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.667178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.667229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.667260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.769 [2024-11-15 11:32:00.667272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.667283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.667336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.769 [2024-11-15 11:32:00.667352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.769 [2024-11-15 11:32:00.667370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.769 [2024-11-15 11:32:00.667381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.769 [2024-11-15 11:32:00.667576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.883 ms, result 0 00:20:18.703 00:20:18.703 00:20:18.703 11:32:01 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:19.271 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:19.271 11:32:02 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76074 00:20:19.271 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76074 ']' 00:20:19.271 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76074 00:20:19.271 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76074) - No such process 00:20:19.271 Process with pid 76074 is not found 00:20:19.271 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 76074 is not found' 00:20:19.271 00:20:19.271 real 1m13.269s 00:20:19.271 user 1m39.855s 00:20:19.271 sys 0m7.496s 00:20:19.271 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:19.271 11:32:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:19.271 ************************************ 00:20:19.271 END TEST ftl_trim 00:20:19.271 ************************************ 00:20:19.271 11:32:02 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:19.271 11:32:02 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:19.271 11:32:02 ftl -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:20:19.271 11:32:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:19.271 ************************************ 00:20:19.271 START TEST ftl_restore 00:20:19.271 ************************************ 00:20:19.271 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:19.530 * Looking for test storage... 00:20:19.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.530 11:32:02 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:19.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.530 --rc genhtml_branch_coverage=1 00:20:19.530 --rc genhtml_function_coverage=1 00:20:19.530 --rc genhtml_legend=1 00:20:19.530 --rc geninfo_all_blocks=1 00:20:19.530 --rc geninfo_unexecuted_blocks=1 00:20:19.530 00:20:19.530 ' 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:19.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.530 --rc genhtml_branch_coverage=1 00:20:19.530 --rc genhtml_function_coverage=1 00:20:19.530 --rc genhtml_legend=1 00:20:19.530 --rc geninfo_all_blocks=1 00:20:19.530 --rc geninfo_unexecuted_blocks=1 00:20:19.530 00:20:19.530 ' 00:20:19.530 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:19.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.530 --rc genhtml_branch_coverage=1 00:20:19.530 --rc genhtml_function_coverage=1 00:20:19.530 --rc genhtml_legend=1 00:20:19.530 --rc geninfo_all_blocks=1 00:20:19.530 --rc geninfo_unexecuted_blocks=1 00:20:19.530 00:20:19.530 ' 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:19.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.531 --rc genhtml_branch_coverage=1 00:20:19.531 --rc genhtml_function_coverage=1 00:20:19.531 --rc genhtml_legend=1 00:20:19.531 --rc geninfo_all_blocks=1 00:20:19.531 --rc geninfo_unexecuted_blocks=1 00:20:19.531 00:20:19.531 ' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
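The cmp_versions walk traced above compares lcov's version against 2 component by component after splitting on ".-:". A minimal standalone sketch of that comparison (not SPDK's exact scripts/common.sh helper; it skips the "decimal" validation step visible in the trace) looks like:

    # return 0 when $1 is strictly less than $2, comparing numeric components
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first greater component decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller component decides
        done
        return 1   # all components equal, so not strictly less-than
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the traced outcome (return 0)
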
00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.K3AY0w5oBS 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:19.531 
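The option handling traced above maps -c to the NV cache BDF and the remaining positional argument to the base device. A condensed sketch under those assumptions; the -u and -f semantics are not exercised in this run, so those names are illustrative only:

    mount_dir=$(mktemp -d)
    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # NV cache device BDF, e.g. 0000:00:10.0
            u) uuid=$OPTARG ;;       # assumed: reuse an existing FTL instance UUID
            f) fast=1 ;;             # assumed flag name; unused in this run
        esac
    done
    shift $((OPTIND - 1))            # equivalent to the "shift 2" the trace shows for "-c <bdf>"
    device=$1                        # base device BDF, e.g. 0000:00:11.0
    timeout=240
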
11:32:02 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76361 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76361 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 76361 ']' 00:20:19.531 11:32:02 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:19.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:19.531 11:32:02 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:19.790 [2024-11-15 11:32:02.516631] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:20:19.790 [2024-11-15 11:32:02.516818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76361 ] 00:20:19.790 [2024-11-15 11:32:02.690653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.049 [2024-11-15 11:32:02.808493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:21.031 11:32:03 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:21.031 11:32:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:21.289 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:21.289 { 00:20:21.289 "name": "nvme0n1", 00:20:21.289 "aliases": [ 00:20:21.289 "66081729-649b-4c0e-9c3a-9d860afcb761" 00:20:21.289 ], 00:20:21.289 "product_name": "NVMe disk", 00:20:21.289 "block_size": 4096, 00:20:21.289 "num_blocks": 1310720, 00:20:21.289 "uuid": 
"66081729-649b-4c0e-9c3a-9d860afcb761", 00:20:21.289 "numa_id": -1, 00:20:21.289 "assigned_rate_limits": { 00:20:21.289 "rw_ios_per_sec": 0, 00:20:21.289 "rw_mbytes_per_sec": 0, 00:20:21.289 "r_mbytes_per_sec": 0, 00:20:21.289 "w_mbytes_per_sec": 0 00:20:21.289 }, 00:20:21.289 "claimed": true, 00:20:21.289 "claim_type": "read_many_write_one", 00:20:21.289 "zoned": false, 00:20:21.289 "supported_io_types": { 00:20:21.289 "read": true, 00:20:21.289 "write": true, 00:20:21.289 "unmap": true, 00:20:21.289 "flush": true, 00:20:21.289 "reset": true, 00:20:21.289 "nvme_admin": true, 00:20:21.289 "nvme_io": true, 00:20:21.289 "nvme_io_md": false, 00:20:21.289 "write_zeroes": true, 00:20:21.289 "zcopy": false, 00:20:21.289 "get_zone_info": false, 00:20:21.289 "zone_management": false, 00:20:21.289 "zone_append": false, 00:20:21.289 "compare": true, 00:20:21.289 "compare_and_write": false, 00:20:21.289 "abort": true, 00:20:21.289 "seek_hole": false, 00:20:21.289 "seek_data": false, 00:20:21.289 "copy": true, 00:20:21.289 "nvme_iov_md": false 00:20:21.289 }, 00:20:21.289 "driver_specific": { 00:20:21.289 "nvme": [ 00:20:21.289 { 00:20:21.289 "pci_address": "0000:00:11.0", 00:20:21.289 "trid": { 00:20:21.289 "trtype": "PCIe", 00:20:21.289 "traddr": "0000:00:11.0" 00:20:21.289 }, 00:20:21.289 "ctrlr_data": { 00:20:21.289 "cntlid": 0, 00:20:21.289 "vendor_id": "0x1b36", 00:20:21.289 "model_number": "QEMU NVMe Ctrl", 00:20:21.289 "serial_number": "12341", 00:20:21.289 "firmware_revision": "8.0.0", 00:20:21.289 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:21.289 "oacs": { 00:20:21.289 "security": 0, 00:20:21.289 "format": 1, 00:20:21.289 "firmware": 0, 00:20:21.289 "ns_manage": 1 00:20:21.289 }, 00:20:21.289 "multi_ctrlr": false, 00:20:21.289 "ana_reporting": false 00:20:21.289 }, 00:20:21.289 "vs": { 00:20:21.289 "nvme_version": "1.4" 00:20:21.289 }, 00:20:21.289 "ns_data": { 00:20:21.289 "id": 1, 00:20:21.289 "can_share": false 00:20:21.289 } 00:20:21.289 } 00:20:21.289 ], 00:20:21.289 "mp_policy": "active_passive" 00:20:21.289 } 00:20:21.289 } 00:20:21.289 ]' 00:20:21.289 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:21.548 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:21.548 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:21.548 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:21.548 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:21.548 11:32:04 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:20:21.548 11:32:04 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:21.548 11:32:04 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:21.548 11:32:04 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:21.548 11:32:04 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:21.548 11:32:04 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:21.806 11:32:04 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=52e40787-0334-4e6a-84ab-dcde8abd22d1 00:20:21.807 11:32:04 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:21.807 11:32:04 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52e40787-0334-4e6a-84ab-dcde8abd22d1 00:20:22.065 11:32:04 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:22.323 11:32:05 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=7fa13ea1-f806-43d6-ba0a-c6457be6f4f3 00:20:22.323 11:32:05 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7fa13ea1-f806-43d6-ba0a-c6457be6f4f3 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=1644352a-147d-4e92-acc7-1798baea8094 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1644352a-147d-4e92-acc7-1798baea8094 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=1644352a-147d-4e92-acc7-1798baea8094 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:22.583 11:32:05 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 1644352a-147d-4e92-acc7-1798baea8094 00:20:22.583 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=1644352a-147d-4e92-acc7-1798baea8094 00:20:22.583 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:22.583 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:22.583 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:22.583 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1644352a-147d-4e92-acc7-1798baea8094 00:20:22.842 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:22.842 { 00:20:22.842 "name": "1644352a-147d-4e92-acc7-1798baea8094", 00:20:22.842 "aliases": [ 00:20:22.842 "lvs/nvme0n1p0" 00:20:22.842 ], 00:20:22.842 "product_name": "Logical Volume", 00:20:22.842 "block_size": 4096, 00:20:22.842 "num_blocks": 26476544, 00:20:22.842 "uuid": "1644352a-147d-4e92-acc7-1798baea8094", 00:20:22.842 "assigned_rate_limits": { 00:20:22.842 "rw_ios_per_sec": 0, 00:20:22.842 "rw_mbytes_per_sec": 0, 00:20:22.842 "r_mbytes_per_sec": 0, 00:20:22.842 "w_mbytes_per_sec": 0 00:20:22.842 }, 00:20:22.842 "claimed": false, 00:20:22.842 "zoned": false, 00:20:22.842 "supported_io_types": { 00:20:22.842 "read": true, 00:20:22.842 "write": true, 00:20:22.842 "unmap": true, 00:20:22.842 "flush": false, 00:20:22.842 "reset": true, 00:20:22.842 "nvme_admin": false, 00:20:22.842 "nvme_io": false, 00:20:22.842 "nvme_io_md": false, 00:20:22.842 "write_zeroes": true, 00:20:22.842 "zcopy": false, 00:20:22.842 "get_zone_info": false, 00:20:22.842 "zone_management": false, 00:20:22.842 "zone_append": false, 00:20:22.842 "compare": false, 00:20:22.842 "compare_and_write": false, 00:20:22.842 "abort": false, 00:20:22.842 "seek_hole": true, 00:20:22.842 "seek_data": true, 00:20:22.842 "copy": false, 00:20:22.842 "nvme_iov_md": false 00:20:22.842 }, 00:20:22.842 "driver_specific": { 00:20:22.842 "lvol": { 00:20:22.842 "lvol_store_uuid": "7fa13ea1-f806-43d6-ba0a-c6457be6f4f3", 00:20:22.842 "base_bdev": "nvme0n1", 00:20:22.842 "thin_provision": true, 00:20:22.842 "num_allocated_clusters": 0, 00:20:22.842 "snapshot": false, 00:20:22.842 "clone": false, 00:20:22.842 "esnap_clone": false 00:20:22.842 } 00:20:22.842 } 00:20:22.842 } 00:20:22.842 ]' 00:20:22.842 11:32:05 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:22.842 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:22.842 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:22.842 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:22.842 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:22.842 11:32:05 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:22.842 11:32:05 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:22.842 11:32:05 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:22.842 11:32:05 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:23.100 11:32:06 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:23.100 11:32:06 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:23.100 11:32:06 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 1644352a-147d-4e92-acc7-1798baea8094 00:20:23.100 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=1644352a-147d-4e92-acc7-1798baea8094 00:20:23.101 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:23.101 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:23.101 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:23.101 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1644352a-147d-4e92-acc7-1798baea8094 00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:23.668 { 00:20:23.668 "name": "1644352a-147d-4e92-acc7-1798baea8094", 00:20:23.668 "aliases": [ 00:20:23.668 "lvs/nvme0n1p0" 00:20:23.668 ], 00:20:23.668 "product_name": "Logical Volume", 00:20:23.668 "block_size": 4096, 00:20:23.668 "num_blocks": 26476544, 00:20:23.668 "uuid": "1644352a-147d-4e92-acc7-1798baea8094", 00:20:23.668 "assigned_rate_limits": { 00:20:23.668 "rw_ios_per_sec": 0, 00:20:23.668 "rw_mbytes_per_sec": 0, 00:20:23.668 "r_mbytes_per_sec": 0, 00:20:23.668 "w_mbytes_per_sec": 0 00:20:23.668 }, 00:20:23.668 "claimed": false, 00:20:23.668 "zoned": false, 00:20:23.668 "supported_io_types": { 00:20:23.668 "read": true, 00:20:23.668 "write": true, 00:20:23.668 "unmap": true, 00:20:23.668 "flush": false, 00:20:23.668 "reset": true, 00:20:23.668 "nvme_admin": false, 00:20:23.668 "nvme_io": false, 00:20:23.668 "nvme_io_md": false, 00:20:23.668 "write_zeroes": true, 00:20:23.668 "zcopy": false, 00:20:23.668 "get_zone_info": false, 00:20:23.668 "zone_management": false, 00:20:23.668 "zone_append": false, 00:20:23.668 "compare": false, 00:20:23.668 "compare_and_write": false, 00:20:23.668 "abort": false, 00:20:23.668 "seek_hole": true, 00:20:23.668 "seek_data": true, 00:20:23.668 "copy": false, 00:20:23.668 "nvme_iov_md": false 00:20:23.668 }, 00:20:23.668 "driver_specific": { 00:20:23.668 "lvol": { 00:20:23.668 "lvol_store_uuid": "7fa13ea1-f806-43d6-ba0a-c6457be6f4f3", 00:20:23.668 "base_bdev": "nvme0n1", 00:20:23.668 "thin_provision": true, 00:20:23.668 "num_allocated_clusters": 0, 00:20:23.668 "snapshot": false, 00:20:23.668 "clone": false, 00:20:23.668 "esnap_clone": false 00:20:23.668 } 00:20:23.668 } 00:20:23.668 } 00:20:23.668 ]' 00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
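The repeated bdev_get_bdevs/jq sequence above is the get_bdev_size helper at work: query the bdev over JSON-RPC, pull the geometry out with jq, and report the size in MiB. A self-contained sketch of what the trace shows (rpc.py path as used in this workspace):

    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for both bdevs here
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 (nvme0n1), 26476544 (the lvol)
        echo $(( bs * nb / 1024 / 1024 ))             # 5120 MiB and 103424 MiB respectively
    }
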
00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:23.668 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:23.668 11:32:06 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:23.668 11:32:06 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:23.927 11:32:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:23.927 11:32:06 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 1644352a-147d-4e92-acc7-1798baea8094 00:20:23.927 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=1644352a-147d-4e92-acc7-1798baea8094 00:20:23.927 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:23.927 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:23.927 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:23.927 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1644352a-147d-4e92-acc7-1798baea8094 00:20:24.186 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:24.186 { 00:20:24.186 "name": "1644352a-147d-4e92-acc7-1798baea8094", 00:20:24.186 "aliases": [ 00:20:24.186 "lvs/nvme0n1p0" 00:20:24.186 ], 00:20:24.186 "product_name": "Logical Volume", 00:20:24.186 "block_size": 4096, 00:20:24.186 "num_blocks": 26476544, 00:20:24.186 "uuid": "1644352a-147d-4e92-acc7-1798baea8094", 00:20:24.186 "assigned_rate_limits": { 00:20:24.186 "rw_ios_per_sec": 0, 00:20:24.186 "rw_mbytes_per_sec": 0, 00:20:24.186 "r_mbytes_per_sec": 0, 00:20:24.186 "w_mbytes_per_sec": 0 00:20:24.186 }, 00:20:24.186 "claimed": false, 00:20:24.186 "zoned": false, 00:20:24.186 "supported_io_types": { 00:20:24.186 "read": true, 00:20:24.186 "write": true, 00:20:24.186 "unmap": true, 00:20:24.186 "flush": false, 00:20:24.186 "reset": true, 00:20:24.186 "nvme_admin": false, 00:20:24.186 "nvme_io": false, 00:20:24.186 "nvme_io_md": false, 00:20:24.186 "write_zeroes": true, 00:20:24.186 "zcopy": false, 00:20:24.186 "get_zone_info": false, 00:20:24.186 "zone_management": false, 00:20:24.186 "zone_append": false, 00:20:24.186 "compare": false, 00:20:24.186 "compare_and_write": false, 00:20:24.186 "abort": false, 00:20:24.186 "seek_hole": true, 00:20:24.186 "seek_data": true, 00:20:24.186 "copy": false, 00:20:24.186 "nvme_iov_md": false 00:20:24.186 }, 00:20:24.186 "driver_specific": { 00:20:24.186 "lvol": { 00:20:24.186 "lvol_store_uuid": "7fa13ea1-f806-43d6-ba0a-c6457be6f4f3", 00:20:24.186 "base_bdev": "nvme0n1", 00:20:24.186 "thin_provision": true, 00:20:24.186 "num_allocated_clusters": 0, 00:20:24.186 "snapshot": false, 00:20:24.186 "clone": false, 00:20:24.186 "esnap_clone": false 00:20:24.186 } 00:20:24.186 } 00:20:24.186 } 00:20:24.186 ]' 00:20:24.186 11:32:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:24.186 11:32:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:24.186 11:32:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:24.186 11:32:07 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:20:24.186 11:32:07 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:24.186 11:32:07 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1644352a-147d-4e92-acc7-1798baea8094 --l2p_dram_limit 10' 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:24.186 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:24.186 11:32:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1644352a-147d-4e92-acc7-1798baea8094 --l2p_dram_limit 10 -c nvc0n1p0 00:20:24.447 [2024-11-15 11:32:07.274843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.274922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:24.447 [2024-11-15 11:32:07.274969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:24.447 [2024-11-15 11:32:07.274998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.275141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.275163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.447 [2024-11-15 11:32:07.275184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:24.447 [2024-11-15 11:32:07.275197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.275269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:24.447 [2024-11-15 11:32:07.276347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:24.447 [2024-11-15 11:32:07.276394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.276409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.447 [2024-11-15 11:32:07.276443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:20:24.447 [2024-11-15 11:32:07.276464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.276722] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bf6a3513-021e-47da-a6ae-9434b328e950 00:20:24.447 [2024-11-15 11:32:07.278641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.278714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:24.447 [2024-11-15 11:32:07.278729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:24.447 [2024-11-15 11:32:07.278745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.288852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 
11:32:07.288932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.447 [2024-11-15 11:32:07.288966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.015 ms 00:20:24.447 [2024-11-15 11:32:07.288980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.289162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.289188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.447 [2024-11-15 11:32:07.289203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:20:24.447 [2024-11-15 11:32:07.289222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.289331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.289355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:24.447 [2024-11-15 11:32:07.289369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:24.447 [2024-11-15 11:32:07.289387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.289423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.447 [2024-11-15 11:32:07.294385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.294445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.447 [2024-11-15 11:32:07.294479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.969 ms 00:20:24.447 [2024-11-15 11:32:07.294491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.294536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.294552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:24.447 [2024-11-15 11:32:07.294565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:24.447 [2024-11-15 11:32:07.294576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.294627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:24.447 [2024-11-15 11:32:07.294830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:24.447 [2024-11-15 11:32:07.294854] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:24.447 [2024-11-15 11:32:07.294871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:24.447 [2024-11-15 11:32:07.294888] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:24.447 [2024-11-15 11:32:07.294902] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:24.447 [2024-11-15 11:32:07.294917] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:24.447 [2024-11-15 11:32:07.294929] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:24.447 [2024-11-15 11:32:07.294946] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:24.447 [2024-11-15 11:32:07.294957] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:24.447 [2024-11-15 11:32:07.294971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.294998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:24.447 [2024-11-15 11:32:07.295012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:20:24.447 [2024-11-15 11:32:07.295038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.295155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.447 [2024-11-15 11:32:07.295188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:24.447 [2024-11-15 11:32:07.295218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:24.447 [2024-11-15 11:32:07.295230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.447 [2024-11-15 11:32:07.295354] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:24.447 [2024-11-15 11:32:07.295382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:24.447 [2024-11-15 11:32:07.295414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:24.447 [2024-11-15 11:32:07.295451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:24.447 [2024-11-15 11:32:07.295488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.447 [2024-11-15 11:32:07.295511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:24.447 [2024-11-15 11:32:07.295522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:24.447 [2024-11-15 11:32:07.295535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:24.447 [2024-11-15 11:32:07.295546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:24.447 [2024-11-15 11:32:07.295559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:24.447 [2024-11-15 11:32:07.295570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:24.447 [2024-11-15 11:32:07.295599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:24.447 [2024-11-15 11:32:07.295637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:24.447 
[2024-11-15 11:32:07.295672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:24.447 [2024-11-15 11:32:07.295709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:24.447 [2024-11-15 11:32:07.295743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:24.447 [2024-11-15 11:32:07.295766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:24.447 [2024-11-15 11:32:07.295781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.447 [2024-11-15 11:32:07.295805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:24.447 [2024-11-15 11:32:07.295816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:24.447 [2024-11-15 11:32:07.295829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:24.447 [2024-11-15 11:32:07.295839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:24.447 [2024-11-15 11:32:07.295852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:24.447 [2024-11-15 11:32:07.295863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:24.447 [2024-11-15 11:32:07.295887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:24.447 [2024-11-15 11:32:07.295899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.447 [2024-11-15 11:32:07.295909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:24.447 [2024-11-15 11:32:07.295926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:24.447 [2024-11-15 11:32:07.295938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:24.448 [2024-11-15 11:32:07.295951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:24.448 [2024-11-15 11:32:07.295963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:24.448 [2024-11-15 11:32:07.295979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:24.448 [2024-11-15 11:32:07.295990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:24.448 [2024-11-15 11:32:07.296003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:24.448 [2024-11-15 11:32:07.296013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:24.448 [2024-11-15 11:32:07.296028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:24.448 [2024-11-15 11:32:07.296054] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:24.448 [2024-11-15 
11:32:07.296074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:24.448 [2024-11-15 11:32:07.296104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:24.448 [2024-11-15 11:32:07.296116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:24.448 [2024-11-15 11:32:07.296130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:24.448 [2024-11-15 11:32:07.296141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:24.448 [2024-11-15 11:32:07.296155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:24.448 [2024-11-15 11:32:07.296166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:24.448 [2024-11-15 11:32:07.296180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:24.448 [2024-11-15 11:32:07.296191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:24.448 [2024-11-15 11:32:07.296207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:24.448 [2024-11-15 11:32:07.296273] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:24.448 [2024-11-15 11:32:07.296288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:24.448 [2024-11-15 11:32:07.296314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:24.448 [2024-11-15 11:32:07.296326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:24.448 [2024-11-15 11:32:07.296340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:24.448 [2024-11-15 11:32:07.296352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.448 [2024-11-15 11:32:07.296367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:24.448 [2024-11-15 11:32:07.296380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:20:24.448 [2024-11-15 11:32:07.296394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.448 [2024-11-15 11:32:07.296452] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:24.448 [2024-11-15 11:32:07.296477] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:27.736 [2024-11-15 11:32:10.249312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.736 [2024-11-15 11:32:10.249424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:27.736 [2024-11-15 11:32:10.249460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2952.870 ms 00:20:27.736 [2024-11-15 11:32:10.249474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.736 [2024-11-15 11:32:10.283377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.736 [2024-11-15 11:32:10.283473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.736 [2024-11-15 11:32:10.283501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.560 ms 00:20:27.736 [2024-11-15 11:32:10.283525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.736 [2024-11-15 11:32:10.283694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.736 [2024-11-15 11:32:10.283730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:27.736 [2024-11-15 11:32:10.283744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:27.736 [2024-11-15 11:32:10.283765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.328500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.328582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.737 [2024-11-15 11:32:10.328600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.664 ms 00:20:27.737 [2024-11-15 11:32:10.328616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.328673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.328696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.737 [2024-11-15 11:32:10.328708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.737 [2024-11-15 11:32:10.328721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.329479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.329531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.737 [2024-11-15 11:32:10.329546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:20:27.737 [2024-11-15 11:32:10.329560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 
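
The trace so far contains everything needed to tie the --l2p_dram_limit setting back to the layout: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region dumped above, while the 10 passed to bdev_ftl_create only caps the resident portion of that table, which is why the ftl_l2p_cache_init notice a few lines below reports "maximum resident size is: 9 (of 10) MiB". A minimal shell sketch of that arithmetic (illustrative only, not part of restore.sh; variable names are made up):

l2p_entries=20971520   # ftl_layout: "L2P entries: 20971520"
l2p_addr_size=4        # ftl_layout: "L2P address size: 4" (bytes per entry)
l2p_dram_limit=10      # bdev_ftl_create ... --l2p_dram_limit 10
echo "full L2P table: $(( l2p_entries * l2p_addr_size / 1024 / 1024 )) MiB"  # prints 80
echo "resident L2P cap: ${l2p_dram_limit} MiB (log: 9 of 10 MiB usable)"
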
[2024-11-15 11:32:10.329733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.329751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.737 [2024-11-15 11:32:10.329781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:27.737 [2024-11-15 11:32:10.329813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.350216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.350285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.737 [2024-11-15 11:32:10.350302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.376 ms 00:20:27.737 [2024-11-15 11:32:10.350316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.373104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:27.737 [2024-11-15 11:32:10.377173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.377226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:27.737 [2024-11-15 11:32:10.377246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.729 ms 00:20:27.737 [2024-11-15 11:32:10.377259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.452826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.452924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:27.737 [2024-11-15 11:32:10.452963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.509 ms 00:20:27.737 [2024-11-15 11:32:10.452991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.453251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.453276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:27.737 [2024-11-15 11:32:10.453295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:20:27.737 [2024-11-15 11:32:10.453322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.478839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.478896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:27.737 [2024-11-15 11:32:10.478932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.448 ms 00:20:27.737 [2024-11-15 11:32:10.478944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.503785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.503840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:27.737 [2024-11-15 11:32:10.503875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.785 ms 00:20:27.737 [2024-11-15 11:32:10.503887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.504686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.504734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:27.737 
[2024-11-15 11:32:10.504766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:20:27.737 [2024-11-15 11:32:10.504780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.578537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.578612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:27.737 [2024-11-15 11:32:10.578652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.707 ms 00:20:27.737 [2024-11-15 11:32:10.578665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.605559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.605636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:27.737 [2024-11-15 11:32:10.605673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.813 ms 00:20:27.737 [2024-11-15 11:32:10.605686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.630766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.630822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:27.737 [2024-11-15 11:32:10.630856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.047 ms 00:20:27.737 [2024-11-15 11:32:10.630867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.656628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.656683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:27.737 [2024-11-15 11:32:10.656717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.728 ms 00:20:27.737 [2024-11-15 11:32:10.656729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.656768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.656783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:27.737 [2024-11-15 11:32:10.656801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:27.737 [2024-11-15 11:32:10.656811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.656928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.737 [2024-11-15 11:32:10.656946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:27.737 [2024-11-15 11:32:10.656964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:27.737 [2024-11-15 11:32:10.656975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.737 [2024-11-15 11:32:10.658493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3383.042 ms, result 0 00:20:27.737 { 00:20:27.737 "name": "ftl0", 00:20:27.737 "uuid": "bf6a3513-021e-47da-a6ae-9434b328e950" 00:20:27.737 } 00:20:27.997 11:32:10 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:27.997 11:32:10 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:28.256 11:32:10 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:28.256 11:32:10 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:28.256 [2024-11-15 11:32:11.165635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.256 [2024-11-15 11:32:11.165732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.256 [2024-11-15 11:32:11.165753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:28.256 [2024-11-15 11:32:11.165778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.256 [2024-11-15 11:32:11.165814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.256 [2024-11-15 11:32:11.169023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.256 [2024-11-15 11:32:11.169107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.256 [2024-11-15 11:32:11.169130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:20:28.256 [2024-11-15 11:32:11.169142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.256 [2024-11-15 11:32:11.169470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.256 [2024-11-15 11:32:11.169503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.256 [2024-11-15 11:32:11.169529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:28.256 [2024-11-15 11:32:11.169540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.256 [2024-11-15 11:32:11.172235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.256 [2024-11-15 11:32:11.172278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.256 [2024-11-15 11:32:11.172295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:20:28.256 [2024-11-15 11:32:11.172306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.256 [2024-11-15 11:32:11.177657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.256 [2024-11-15 11:32:11.177706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.256 [2024-11-15 11:32:11.177741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:20:28.256 [2024-11-15 11:32:11.177752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.256 [2024-11-15 11:32:11.203742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.256 [2024-11-15 11:32:11.203806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.256 [2024-11-15 11:32:11.203840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.936 ms 00:20:28.256 [2024-11-15 11:32:11.203851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.516 [2024-11-15 11:32:11.220676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.516 [2024-11-15 11:32:11.220732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.516 [2024-11-15 11:32:11.220768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.773 ms 00:20:28.516 [2024-11-15 11:32:11.220780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.516 [2024-11-15 11:32:11.220975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.516 [2024-11-15 11:32:11.220995] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.516 [2024-11-15 11:32:11.221011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:28.516 [2024-11-15 11:32:11.221022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.516 [2024-11-15 11:32:11.246363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.516 [2024-11-15 11:32:11.246419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.516 [2024-11-15 11:32:11.246437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.297 ms 00:20:28.516 [2024-11-15 11:32:11.246448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.516 [2024-11-15 11:32:11.271242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.516 [2024-11-15 11:32:11.271297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.516 [2024-11-15 11:32:11.271331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.743 ms 00:20:28.516 [2024-11-15 11:32:11.271341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.516 [2024-11-15 11:32:11.295995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.517 [2024-11-15 11:32:11.296072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.517 [2024-11-15 11:32:11.296092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.602 ms 00:20:28.517 [2024-11-15 11:32:11.296103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.517 [2024-11-15 11:32:11.321167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.517 [2024-11-15 11:32:11.321224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.517 [2024-11-15 11:32:11.321243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.966 ms 00:20:28.517 [2024-11-15 11:32:11.321255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.517 [2024-11-15 11:32:11.321305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.517 [2024-11-15 11:32:11.321329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321483] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 
[2024-11-15 11:32:11.321806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.321990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:28.517 [2024-11-15 11:32:11.322187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.517 [2024-11-15 11:32:11.322635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.518 [2024-11-15 11:32:11.322892] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.518 [2024-11-15 11:32:11.322910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf6a3513-021e-47da-a6ae-9434b328e950 00:20:28.518 [2024-11-15 11:32:11.322922] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.518 [2024-11-15 11:32:11.322937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.518 [2024-11-15 11:32:11.322948] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.518 [2024-11-15 11:32:11.322964] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.518 [2024-11-15 11:32:11.322975] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.518 [2024-11-15 11:32:11.322988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.518 [2024-11-15 11:32:11.322999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.518 [2024-11-15 11:32:11.323010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.518 [2024-11-15 11:32:11.323020] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:28.518 [2024-11-15 11:32:11.323033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.518 [2024-11-15 11:32:11.323074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.518 [2024-11-15 11:32:11.323092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:20:28.518 [2024-11-15 11:32:11.323104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.518 [2024-11-15 11:32:11.340182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.518 [2024-11-15 11:32:11.340245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.518 [2024-11-15 11:32:11.340266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.007 ms 00:20:28.518 [2024-11-15 11:32:11.340279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.518 [2024-11-15 11:32:11.340789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.518 [2024-11-15 11:32:11.340817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.518 [2024-11-15 11:32:11.340838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:20:28.518 [2024-11-15 11:32:11.340850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.518 [2024-11-15 11:32:11.390901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.518 [2024-11-15 11:32:11.390970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.518 [2024-11-15 11:32:11.391006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.518 [2024-11-15 11:32:11.391017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.518 [2024-11-15 11:32:11.391106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.518 [2024-11-15 11:32:11.391123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.518 [2024-11-15 11:32:11.391140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.518 [2024-11-15 11:32:11.391151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.518 [2024-11-15 11:32:11.391275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.518 [2024-11-15 11:32:11.391294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.518 [2024-11-15 11:32:11.391309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.518 [2024-11-15 11:32:11.391320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.518 [2024-11-15 11:32:11.391353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.518 [2024-11-15 11:32:11.391367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.518 [2024-11-15 11:32:11.391381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.518 [2024-11-15 11:32:11.391392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.777 [2024-11-15 11:32:11.479154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.777 [2024-11-15 11:32:11.479265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.777 [2024-11-15 11:32:11.479302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:28.777 [2024-11-15 11:32:11.479314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.777 [2024-11-15 11:32:11.548897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.777 [2024-11-15 11:32:11.548976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.777 [2024-11-15 11:32:11.549013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.777 [2024-11-15 11:32:11.549028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.777 [2024-11-15 11:32:11.549195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.777 [2024-11-15 11:32:11.549217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.777 [2024-11-15 11:32:11.549231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.778 [2024-11-15 11:32:11.549242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.778 [2024-11-15 11:32:11.549332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.778 [2024-11-15 11:32:11.549351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.778 [2024-11-15 11:32:11.549365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.778 [2024-11-15 11:32:11.549376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.778 [2024-11-15 11:32:11.549503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.778 [2024-11-15 11:32:11.549539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.778 [2024-11-15 11:32:11.549556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.778 [2024-11-15 11:32:11.549568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.778 [2024-11-15 11:32:11.549623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.778 [2024-11-15 11:32:11.549641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.778 [2024-11-15 11:32:11.549656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.778 [2024-11-15 11:32:11.549667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.778 [2024-11-15 11:32:11.549721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.778 [2024-11-15 11:32:11.549746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.778 [2024-11-15 11:32:11.549762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.778 [2024-11-15 11:32:11.549773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.778 [2024-11-15 11:32:11.549863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.778 [2024-11-15 11:32:11.549879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.778 [2024-11-15 11:32:11.549893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.778 [2024-11-15 11:32:11.549904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.778 [2024-11-15 11:32:11.550094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.386 ms, result 0 00:20:28.778 true 00:20:28.778 11:32:11 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76361 
00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76361 ']' 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76361 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76361 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:28.778 killing process with pid 76361 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76361' 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 76361 00:20:28.778 11:32:11 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 76361 00:20:34.046 11:32:16 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:38.231 262144+0 records in 00:20:38.231 262144+0 records out 00:20:38.231 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.445 s, 242 MB/s 00:20:38.231 11:32:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:39.606 11:32:22 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:39.606 [2024-11-15 11:32:22.506700] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:20:39.606 [2024-11-15 11:32:22.506917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76603 ] 00:20:39.867 [2024-11-15 11:32:22.712496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.127 [2024-11-15 11:32:22.867304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.386 [2024-11-15 11:32:23.208626] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.386 [2024-11-15 11:32:23.208719] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:40.646 [2024-11-15 11:32:23.377202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.377274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.646 [2024-11-15 11:32:23.377305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.646 [2024-11-15 11:32:23.377316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.377382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.377399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.646 [2024-11-15 11:32:23.377416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:40.646 [2024-11-15 11:32:23.377426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.377470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
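
The xtrace above shows killprocess(76361) verifying that the target pid is still alive and is the SPDK reactor before sending the signal. A sketch of that helper, reconstructed only from the branches visible in the trace (the shipped function in common/autotest_common.sh has more checks than this run exercises):

killprocess() {
    # reconstruction of the autotest_common.sh@952-976 trace above
    local pid=$1
    [ -z "$pid" ] && return 1               # @952: bail out on an empty pid
    kill -0 "$pid" 2>/dev/null || return 1  # @956: target must still exist
    local process_name
    if [ "$(uname)" = Linux ]; then         # @957
        process_name=$(ps --no-headers -o comm= "$pid")  # @958: "reactor_0" here
    fi
    if [ "$process_name" != sudo ]; then    # @962: trace takes the plain-kill path
        echo "killing process with pid $pid"
        kill "$pid"                         # @971
        wait "$pid"                         # @976: reap the SPDK app
    fi
}

The dd numbers above also check out: bs=4K count=256K is 262144 records of 4096 bytes, i.e. 1073741824 bytes (exactly 1 GiB), and 1 GiB in 4.445 s is the reported 242 MB/s. That 1 GiB testfile is hashed with md5sum and then written into ftl0 by spdk_dd using the ftl.json saved via save_subsystem_config earlier, which is the data the restore path below will verify.
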
Using nvc0n1p0 as write buffer cache 00:20:40.646 [2024-11-15 11:32:23.378289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.646 [2024-11-15 11:32:23.378323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.378336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.646 [2024-11-15 11:32:23.378349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:20:40.646 [2024-11-15 11:32:23.378360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.380192] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:40.646 [2024-11-15 11:32:23.394184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.394242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:40.646 [2024-11-15 11:32:23.394258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.993 ms 00:20:40.646 [2024-11-15 11:32:23.394268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.394349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.394367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:40.646 [2024-11-15 11:32:23.394379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:40.646 [2024-11-15 11:32:23.394389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.402940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.403001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.646 [2024-11-15 11:32:23.403015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.477 ms 00:20:40.646 [2024-11-15 11:32:23.403051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.403162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.403181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.646 [2024-11-15 11:32:23.403193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:40.646 [2024-11-15 11:32:23.403203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.403290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.403308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.646 [2024-11-15 11:32:23.403321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:40.646 [2024-11-15 11:32:23.403331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.403373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.646 [2024-11-15 11:32:23.407793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.407845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.646 [2024-11-15 11:32:23.407859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:20:40.646 [2024-11-15 11:32:23.407879] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.407914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.407928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.646 [2024-11-15 11:32:23.407940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:40.646 [2024-11-15 11:32:23.407950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.407995] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:40.646 [2024-11-15 11:32:23.408056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:40.646 [2024-11-15 11:32:23.408098] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:40.646 [2024-11-15 11:32:23.408144] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:40.646 [2024-11-15 11:32:23.408245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:40.646 [2024-11-15 11:32:23.408260] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.646 [2024-11-15 11:32:23.408274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:40.646 [2024-11-15 11:32:23.408288] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.646 [2024-11-15 11:32:23.408301] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.646 [2024-11-15 11:32:23.408312] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.646 [2024-11-15 11:32:23.408322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.646 [2024-11-15 11:32:23.408333] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:40.646 [2024-11-15 11:32:23.408353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:40.646 [2024-11-15 11:32:23.408365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.408377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.646 [2024-11-15 11:32:23.408388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:20:40.646 [2024-11-15 11:32:23.408398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.408496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.646 [2024-11-15 11:32:23.408510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.646 [2024-11-15 11:32:23.408522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:40.646 [2024-11-15 11:32:23.408532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.646 [2024-11-15 11:32:23.408651] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.646 [2024-11-15 11:32:23.408671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.646 [2024-11-15 11:32:23.408683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:40.646 [2024-11-15 11:32:23.408694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.646 [2024-11-15 11:32:23.408704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.646 [2024-11-15 11:32:23.408714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.647 [2024-11-15 11:32:23.408734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.647 [2024-11-15 11:32:23.408745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.647 [2024-11-15 11:32:23.408765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.647 [2024-11-15 11:32:23.408775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.647 [2024-11-15 11:32:23.408785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.647 [2024-11-15 11:32:23.408795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.647 [2024-11-15 11:32:23.408805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:40.647 [2024-11-15 11:32:23.408832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.647 [2024-11-15 11:32:23.408853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:40.647 [2024-11-15 11:32:23.408862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.647 [2024-11-15 11:32:23.408882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.647 [2024-11-15 11:32:23.408901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.647 [2024-11-15 11:32:23.408910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.647 [2024-11-15 11:32:23.408929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.647 [2024-11-15 11:32:23.408939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.647 [2024-11-15 11:32:23.408958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.647 [2024-11-15 11:32:23.408967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:40.647 [2024-11-15 11:32:23.408977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:40.647 [2024-11-15 11:32:23.408986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.647 [2024-11-15 11:32:23.408996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:40.647 [2024-11-15 11:32:23.409006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.647 [2024-11-15 11:32:23.409015] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:40.647 [2024-11-15 11:32:23.409025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:40.647 [2024-11-15 11:32:23.409050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.647 [2024-11-15 11:32:23.409061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:40.647 [2024-11-15 11:32:23.409106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:40.647 [2024-11-15 11:32:23.409118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.647 [2024-11-15 11:32:23.409130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:40.647 [2024-11-15 11:32:23.409141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:40.647 [2024-11-15 11:32:23.409151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.647 [2024-11-15 11:32:23.409161] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.647 [2024-11-15 11:32:23.409171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.647 [2024-11-15 11:32:23.409183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.647 [2024-11-15 11:32:23.409193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.647 [2024-11-15 11:32:23.409204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.647 [2024-11-15 11:32:23.409214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.647 [2024-11-15 11:32:23.409224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.647 [2024-11-15 11:32:23.409234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.647 [2024-11-15 11:32:23.409243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.647 [2024-11-15 11:32:23.409253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.647 [2024-11-15 11:32:23.409265] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.647 [2024-11-15 11:32:23.409278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.647 [2024-11-15 11:32:23.409301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:40.647 [2024-11-15 11:32:23.409311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:40.647 [2024-11-15 11:32:23.409322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:40.647 [2024-11-15 11:32:23.409332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:40.647 [2024-11-15 11:32:23.409342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:40.647 [2024-11-15 11:32:23.409352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:40.647 [2024-11-15 11:32:23.409362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:40.647 [2024-11-15 11:32:23.409372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:40.647 [2024-11-15 11:32:23.409383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:40.647 [2024-11-15 11:32:23.409437] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.647 [2024-11-15 11:32:23.409475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.647 [2024-11-15 11:32:23.409499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.647 [2024-11-15 11:32:23.409509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.647 [2024-11-15 11:32:23.409522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:40.647 [2024-11-15 11:32:23.409534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.409545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.647 [2024-11-15 11:32:23.409555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:20:40.647 [2024-11-15 11:32:23.409565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.445747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.445821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.647 [2024-11-15 11:32:23.445839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.116 ms 00:20:40.647 [2024-11-15 11:32:23.445850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.445965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.445980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.647 [2024-11-15 11:32:23.445992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.060 ms 00:20:40.647 [2024-11-15 11:32:23.446002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.494415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.494487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.647 [2024-11-15 11:32:23.494505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.300 ms 00:20:40.647 [2024-11-15 11:32:23.494516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.494581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.494598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.647 [2024-11-15 11:32:23.494622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:40.647 [2024-11-15 11:32:23.494633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.495289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.495318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.647 [2024-11-15 11:32:23.495332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:20:40.647 [2024-11-15 11:32:23.495344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.495509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.495528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.647 [2024-11-15 11:32:23.495540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:20:40.647 [2024-11-15 11:32:23.495562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.647 [2024-11-15 11:32:23.513790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.647 [2024-11-15 11:32:23.513846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.647 [2024-11-15 11:32:23.513871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.199 ms 00:20:40.648 [2024-11-15 11:32:23.513882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.648 [2024-11-15 11:32:23.530082] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:40.648 [2024-11-15 11:32:23.530158] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:40.648 [2024-11-15 11:32:23.530177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.648 [2024-11-15 11:32:23.530189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:40.648 [2024-11-15 11:32:23.530204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.157 ms 00:20:40.648 [2024-11-15 11:32:23.530215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.648 [2024-11-15 11:32:23.557809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.648 [2024-11-15 11:32:23.557879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:40.648 [2024-11-15 11:32:23.557896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.547 ms 00:20:40.648 [2024-11-15 11:32:23.557907] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.648 [2024-11-15 11:32:23.571450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.648 [2024-11-15 11:32:23.571522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:40.648 [2024-11-15 11:32:23.571538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.497 ms 00:20:40.648 [2024-11-15 11:32:23.571548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.648 [2024-11-15 11:32:23.584908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.648 [2024-11-15 11:32:23.584964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:40.648 [2024-11-15 11:32:23.584979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.318 ms 00:20:40.648 [2024-11-15 11:32:23.584990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.648 [2024-11-15 11:32:23.585907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.648 [2024-11-15 11:32:23.585975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:40.648 [2024-11-15 11:32:23.585990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:20:40.648 [2024-11-15 11:32:23.586001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.655147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.655232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:40.908 [2024-11-15 11:32:23.655269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.085 ms 00:20:40.908 [2024-11-15 11:32:23.655295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.666544] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.908 [2024-11-15 11:32:23.669737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.669792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.908 [2024-11-15 11:32:23.669825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.374 ms 00:20:40.908 [2024-11-15 11:32:23.669836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.669956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.669978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:40.908 [2024-11-15 11:32:23.669991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:40.908 [2024-11-15 11:32:23.670018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.670143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.670171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:40.908 [2024-11-15 11:32:23.670184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:40.908 [2024-11-15 11:32:23.670196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.670228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.670243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:20:40.908 [2024-11-15 11:32:23.670254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:40.908 [2024-11-15 11:32:23.670266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.670319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:40.908 [2024-11-15 11:32:23.670338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.670358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:40.908 [2024-11-15 11:32:23.670371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:40.908 [2024-11-15 11:32:23.670382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.697842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.697907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:40.908 [2024-11-15 11:32:23.697940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.435 ms 00:20:40.908 [2024-11-15 11:32:23.697951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.698065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.908 [2024-11-15 11:32:23.698085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:40.908 [2024-11-15 11:32:23.698098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:40.908 [2024-11-15 11:32:23.698108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.908 [2024-11-15 11:32:23.699729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.912 ms, result 0 00:20:41.869  [2024-11-15T11:32:25.754Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-15T11:32:27.125Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-15T11:32:28.060Z] Copying: 69/1024 [MB] (24 MBps) [2024-11-15T11:32:28.997Z] Copying: 92/1024 [MB] (22 MBps) [2024-11-15T11:32:29.932Z] Copying: 115/1024 [MB] (22 MBps) [2024-11-15T11:32:30.866Z] Copying: 138/1024 [MB] (22 MBps) [2024-11-15T11:32:31.801Z] Copying: 162/1024 [MB] (24 MBps) [2024-11-15T11:32:32.734Z] Copying: 187/1024 [MB] (24 MBps) [2024-11-15T11:32:34.145Z] Copying: 211/1024 [MB] (23 MBps) [2024-11-15T11:32:34.727Z] Copying: 235/1024 [MB] (24 MBps) [2024-11-15T11:32:36.104Z] Copying: 258/1024 [MB] (23 MBps) [2024-11-15T11:32:37.041Z] Copying: 281/1024 [MB] (23 MBps) [2024-11-15T11:32:37.978Z] Copying: 305/1024 [MB] (23 MBps) [2024-11-15T11:32:38.914Z] Copying: 328/1024 [MB] (23 MBps) [2024-11-15T11:32:39.850Z] Copying: 351/1024 [MB] (23 MBps) [2024-11-15T11:32:40.786Z] Copying: 375/1024 [MB] (23 MBps) [2024-11-15T11:32:41.721Z] Copying: 398/1024 [MB] (23 MBps) [2024-11-15T11:32:43.098Z] Copying: 422/1024 [MB] (23 MBps) [2024-11-15T11:32:43.740Z] Copying: 445/1024 [MB] (23 MBps) [2024-11-15T11:32:45.116Z] Copying: 469/1024 [MB] (23 MBps) [2024-11-15T11:32:46.053Z] Copying: 491/1024 [MB] (22 MBps) [2024-11-15T11:32:46.987Z] Copying: 514/1024 [MB] (23 MBps) [2024-11-15T11:32:47.923Z] Copying: 538/1024 [MB] (23 MBps) [2024-11-15T11:32:48.860Z] Copying: 562/1024 [MB] (23 MBps) [2024-11-15T11:32:49.797Z] Copying: 585/1024 [MB] (23 MBps) [2024-11-15T11:32:50.733Z] Copying: 609/1024 [MB] (23 MBps) [2024-11-15T11:32:52.117Z] Copying: 633/1024 [MB] (23 
MBps) [2024-11-15T11:32:53.062Z] Copying: 656/1024 [MB] (23 MBps) [2024-11-15T11:32:54.020Z] Copying: 680/1024 [MB] (23 MBps) [2024-11-15T11:32:54.956Z] Copying: 704/1024 [MB] (23 MBps) [2024-11-15T11:32:55.890Z] Copying: 728/1024 [MB] (23 MBps) [2024-11-15T11:32:56.826Z] Copying: 752/1024 [MB] (23 MBps) [2024-11-15T11:32:57.763Z] Copying: 775/1024 [MB] (23 MBps) [2024-11-15T11:32:59.140Z] Copying: 798/1024 [MB] (23 MBps) [2024-11-15T11:33:00.076Z] Copying: 821/1024 [MB] (23 MBps) [2024-11-15T11:33:01.012Z] Copying: 844/1024 [MB] (22 MBps) [2024-11-15T11:33:01.949Z] Copying: 867/1024 [MB] (23 MBps) [2024-11-15T11:33:02.885Z] Copying: 891/1024 [MB] (23 MBps) [2024-11-15T11:33:03.820Z] Copying: 915/1024 [MB] (24 MBps) [2024-11-15T11:33:04.890Z] Copying: 938/1024 [MB] (23 MBps) [2024-11-15T11:33:05.825Z] Copying: 961/1024 [MB] (23 MBps) [2024-11-15T11:33:06.761Z] Copying: 984/1024 [MB] (23 MBps) [2024-11-15T11:33:07.700Z] Copying: 1008/1024 [MB] (23 MBps) [2024-11-15T11:33:07.700Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-15 11:33:07.360776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.360824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:24.751 [2024-11-15 11:33:07.360858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.751 [2024-11-15 11:33:07.360869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.360894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:24.751 [2024-11-15 11:33:07.364040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.364082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:24.751 [2024-11-15 11:33:07.364099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.125 ms 00:21:24.751 [2024-11-15 11:33:07.364134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.366023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.366116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:24.751 [2024-11-15 11:33:07.366131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:21:24.751 [2024-11-15 11:33:07.366141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.381745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.381810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:24.751 [2024-11-15 11:33:07.381842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.584 ms 00:21:24.751 [2024-11-15 11:33:07.381852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.387167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.387215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:24.751 [2024-11-15 11:33:07.387243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.264 ms 00:21:24.751 [2024-11-15 11:33:07.387253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.412734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 
11:33:07.412773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:24.751 [2024-11-15 11:33:07.412803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.434 ms 00:21:24.751 [2024-11-15 11:33:07.412813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.427913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.427949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:24.751 [2024-11-15 11:33:07.427979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.061 ms 00:21:24.751 [2024-11-15 11:33:07.428005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.428161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.428184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:24.751 [2024-11-15 11:33:07.428208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:24.751 [2024-11-15 11:33:07.428233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.452950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.452989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:24.751 [2024-11-15 11:33:07.453018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.684 ms 00:21:24.751 [2024-11-15 11:33:07.453027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.477232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.477281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:24.751 [2024-11-15 11:33:07.477330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.159 ms 00:21:24.751 [2024-11-15 11:33:07.477340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.501065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.501125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:24.751 [2024-11-15 11:33:07.501155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.687 ms 00:21:24.751 [2024-11-15 11:33:07.501166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.524989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.751 [2024-11-15 11:33:07.525056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:24.751 [2024-11-15 11:33:07.525093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.751 ms 00:21:24.751 [2024-11-15 11:33:07.525103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.751 [2024-11-15 11:33:07.525141] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:24.751 [2024-11-15 11:33:07.525166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:24.751 [2024-11-15 11:33:07.525260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525480] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525749] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.525996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 
11:33:07.526016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:24.752 [2024-11-15 11:33:07.526284] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:24.753 [2024-11-15 11:33:07.526308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf6a3513-021e-47da-a6ae-9434b328e950 00:21:24.753 [2024-11-15 
11:33:07.526327] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:24.753 [2024-11-15 11:33:07.526337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:24.753 [2024-11-15 11:33:07.526346] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:24.753 [2024-11-15 11:33:07.526356] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:24.753 [2024-11-15 11:33:07.526366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:24.753 [2024-11-15 11:33:07.526390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:24.753 [2024-11-15 11:33:07.526399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:24.753 [2024-11-15 11:33:07.526438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:24.753 [2024-11-15 11:33:07.526447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:24.753 [2024-11-15 11:33:07.526457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.753 [2024-11-15 11:33:07.526468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:24.753 [2024-11-15 11:33:07.526478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:21:24.753 [2024-11-15 11:33:07.526488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.540450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.753 [2024-11-15 11:33:07.540486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:24.753 [2024-11-15 11:33:07.540515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.941 ms 00:21:24.753 [2024-11-15 11:33:07.540525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.540979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.753 [2024-11-15 11:33:07.541006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:24.753 [2024-11-15 11:33:07.541019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:21:24.753 [2024-11-15 11:33:07.541055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.577286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.753 [2024-11-15 11:33:07.577353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.753 [2024-11-15 11:33:07.577390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.753 [2024-11-15 11:33:07.577402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.577469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.753 [2024-11-15 11:33:07.577483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.753 [2024-11-15 11:33:07.577493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.753 [2024-11-15 11:33:07.577503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.577599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.753 [2024-11-15 11:33:07.577632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.753 [2024-11-15 11:33:07.577644] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.753 [2024-11-15 11:33:07.577654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.577674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.753 [2024-11-15 11:33:07.577688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.753 [2024-11-15 11:33:07.577698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.753 [2024-11-15 11:33:07.577708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.753 [2024-11-15 11:33:07.661577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.753 [2024-11-15 11:33:07.661631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.753 [2024-11-15 11:33:07.661663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.753 [2024-11-15 11:33:07.661673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:25.012 [2024-11-15 11:33:07.730218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:25.012 [2024-11-15 11:33:07.730361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:25.012 [2024-11-15 11:33:07.730456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:25.012 [2024-11-15 11:33:07.730637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:25.012 [2024-11-15 11:33:07.730721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:21:25.012 [2024-11-15 11:33:07.730840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.730902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:25.012 [2024-11-15 11:33:07.730918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:25.012 [2024-11-15 11:33:07.730929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:25.012 [2024-11-15 11:33:07.730939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.012 [2024-11-15 11:33:07.731147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.287 ms, result 0 00:21:25.948 00:21:25.948 00:21:25.948 11:33:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:25.948 [2024-11-15 11:33:08.721110] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:21:25.948 [2024-11-15 11:33:08.721304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77062 ] 00:21:25.948 [2024-11-15 11:33:08.893649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.207 [2024-11-15 11:33:08.990680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.466 [2024-11-15 11:33:09.293593] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.466 [2024-11-15 11:33:09.293688] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.726 [2024-11-15 11:33:09.453260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.453324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:26.726 [2024-11-15 11:33:09.453364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:26.726 [2024-11-15 11:33:09.453376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.453449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.453466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.726 [2024-11-15 11:33:09.453481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:26.726 [2024-11-15 11:33:09.453491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.453518] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:26.726 [2024-11-15 11:33:09.454342] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:26.726 [2024-11-15 11:33:09.454378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.454391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.726 [2024-11-15 11:33:09.454402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 
00:21:26.726 [2024-11-15 11:33:09.454413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.456272] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:26.726 [2024-11-15 11:33:09.469969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.470012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:26.726 [2024-11-15 11:33:09.470052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.698 ms 00:21:26.726 [2024-11-15 11:33:09.470066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.470134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.470151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:26.726 [2024-11-15 11:33:09.470163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:26.726 [2024-11-15 11:33:09.470173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.478493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.478534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.726 [2024-11-15 11:33:09.478563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.256 ms 00:21:26.726 [2024-11-15 11:33:09.478589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.478673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.478690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.726 [2024-11-15 11:33:09.478702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:26.726 [2024-11-15 11:33:09.478712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.478811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.478829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:26.726 [2024-11-15 11:33:09.478841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:26.726 [2024-11-15 11:33:09.478852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.478888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:26.726 [2024-11-15 11:33:09.483216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.483265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.726 [2024-11-15 11:33:09.483294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:21:26.726 [2024-11-15 11:33:09.483310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.483344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.483358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:26.726 [2024-11-15 11:33:09.483369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:26.726 [2024-11-15 11:33:09.483379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 
[2024-11-15 11:33:09.483422] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:26.726 [2024-11-15 11:33:09.483450] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:26.726 [2024-11-15 11:33:09.483487] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:26.726 [2024-11-15 11:33:09.483524] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:26.726 [2024-11-15 11:33:09.483621] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:26.726 [2024-11-15 11:33:09.483636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:26.726 [2024-11-15 11:33:09.483650] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:26.726 [2024-11-15 11:33:09.483676] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:26.726 [2024-11-15 11:33:09.483691] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:26.726 [2024-11-15 11:33:09.483702] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:26.726 [2024-11-15 11:33:09.483713] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:26.726 [2024-11-15 11:33:09.483723] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:26.726 [2024-11-15 11:33:09.483739] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:26.726 [2024-11-15 11:33:09.483751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.483761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:26.726 [2024-11-15 11:33:09.483773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:21:26.726 [2024-11-15 11:33:09.483783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.483865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.726 [2024-11-15 11:33:09.483879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:26.726 [2024-11-15 11:33:09.483890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:26.726 [2024-11-15 11:33:09.483901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.726 [2024-11-15 11:33:09.484012] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:26.726 [2024-11-15 11:33:09.484043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:26.726 [2024-11-15 11:33:09.484058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.726 [2024-11-15 11:33:09.484069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.726 [2024-11-15 11:33:09.484079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:26.726 [2024-11-15 11:33:09.484090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:26.726 [2024-11-15 11:33:09.484100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:26.726 [2024-11-15 11:33:09.484109] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:21:26.726 [2024-11-15 11:33:09.484120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:26.726 [2024-11-15 11:33:09.484129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.726 [2024-11-15 11:33:09.484139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:26.726 [2024-11-15 11:33:09.484149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:26.726 [2024-11-15 11:33:09.484159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.726 [2024-11-15 11:33:09.484168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:26.726 [2024-11-15 11:33:09.484178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:26.726 [2024-11-15 11:33:09.484201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.726 [2024-11-15 11:33:09.484213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:26.726 [2024-11-15 11:33:09.484224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:26.727 [2024-11-15 11:33:09.484268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:26.727 [2024-11-15 11:33:09.484313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:26.727 [2024-11-15 11:33:09.484342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:26.727 [2024-11-15 11:33:09.484371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:26.727 [2024-11-15 11:33:09.484402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.727 [2024-11-15 11:33:09.484421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:26.727 [2024-11-15 11:33:09.484431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:26.727 [2024-11-15 11:33:09.484441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.727 [2024-11-15 11:33:09.484452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:26.727 [2024-11-15 11:33:09.484462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:26.727 [2024-11-15 
11:33:09.484472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:26.727 [2024-11-15 11:33:09.484492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:26.727 [2024-11-15 11:33:09.484501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484511] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:26.727 [2024-11-15 11:33:09.484522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:26.727 [2024-11-15 11:33:09.484532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.727 [2024-11-15 11:33:09.484553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:26.727 [2024-11-15 11:33:09.484565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:26.727 [2024-11-15 11:33:09.484575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:26.727 [2024-11-15 11:33:09.484586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:26.727 [2024-11-15 11:33:09.484596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:26.727 [2024-11-15 11:33:09.484606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:26.727 [2024-11-15 11:33:09.484618] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:26.727 [2024-11-15 11:33:09.484631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:26.727 [2024-11-15 11:33:09.484654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:26.727 [2024-11-15 11:33:09.484664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:26.727 [2024-11-15 11:33:09.484675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:26.727 [2024-11-15 11:33:09.484685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:26.727 [2024-11-15 11:33:09.484695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:26.727 [2024-11-15 11:33:09.484706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:26.727 [2024-11-15 11:33:09.484717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:26.727 [2024-11-15 11:33:09.484727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:26.727 [2024-11-15 11:33:09.484738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:26.727 [2024-11-15 11:33:09.484791] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:26.727 [2024-11-15 11:33:09.484808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:26.727 [2024-11-15 11:33:09.484831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:26.727 [2024-11-15 11:33:09.484842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:26.727 [2024-11-15 11:33:09.484852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:26.727 [2024-11-15 11:33:09.484864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.484875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:26.727 [2024-11-15 11:33:09.484886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:21:26.727 [2024-11-15 11:33:09.484897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.520343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.520390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.727 [2024-11-15 11:33:09.520424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.386 ms 00:21:26.727 [2024-11-15 11:33:09.520435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.520545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.520560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:26.727 [2024-11-15 11:33:09.520573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:26.727 [2024-11-15 11:33:09.520583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.567391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.567458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:26.727 [2024-11-15 11:33:09.567493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.697 ms 00:21:26.727 [2024-11-15 11:33:09.567504] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.567570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.567587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:26.727 [2024-11-15 11:33:09.567605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:26.727 [2024-11-15 11:33:09.567616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.568290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.568324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:26.727 [2024-11-15 11:33:09.568338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:21:26.727 [2024-11-15 11:33:09.568349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.568524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.568542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:26.727 [2024-11-15 11:33:09.568555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:21:26.727 [2024-11-15 11:33:09.568573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.585534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.585591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:26.727 [2024-11-15 11:33:09.585628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.935 ms 00:21:26.727 [2024-11-15 11:33:09.585640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.600013] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:26.727 [2024-11-15 11:33:09.600081] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:26.727 [2024-11-15 11:33:09.600115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.600127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:26.727 [2024-11-15 11:33:09.600139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.355 ms 00:21:26.727 [2024-11-15 11:33:09.600149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.624365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.727 [2024-11-15 11:33:09.624422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:26.727 [2024-11-15 11:33:09.624454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.173 ms 00:21:26.727 [2024-11-15 11:33:09.624465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.727 [2024-11-15 11:33:09.637326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.728 [2024-11-15 11:33:09.637386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:26.728 [2024-11-15 11:33:09.637418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.818 ms 00:21:26.728 [2024-11-15 11:33:09.637443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
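Every management step in the restore sequence above is wrapped in the same trace_step quadruple (Action, name, duration, status), which makes the slow steps easy to rank once the console output is saved to a file. A minimal sketch, assuming a capture named console.log (hypothetical name):

    # Pair each "name:" line with the "duration:" line that follows it,
    # then rank the FTL management steps by elapsed time in ms.
    awk '/trace_step:.*name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step:.*duration: / { sub(/.*duration: /, ""); print $1 "\t" step }' \
        console.log | sort -rn | head

On this run the top entries are Restore P2L checkpoints (65.714 ms, just below) and Initialize NV cache (46.697 ms, above).
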
00:21:26.728 [2024-11-15 11:33:09.650549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.728 [2024-11-15 11:33:09.650599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:26.728 [2024-11-15 11:33:09.650629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.065 ms 00:21:26.728 [2024-11-15 11:33:09.650639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.728 [2024-11-15 11:33:09.651451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.728 [2024-11-15 11:33:09.651517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:26.728 [2024-11-15 11:33:09.651547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:21:26.728 [2024-11-15 11:33:09.651562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.717301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.717380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:26.987 [2024-11-15 11:33:09.717436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.714 ms 00:21:26.987 [2024-11-15 11:33:09.717448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.727178] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:26.987 [2024-11-15 11:33:09.729357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.729393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:26.987 [2024-11-15 11:33:09.729408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.852 ms 00:21:26.987 [2024-11-15 11:33:09.729435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.729525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.729543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:26.987 [2024-11-15 11:33:09.729556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:26.987 [2024-11-15 11:33:09.729570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.729715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.729739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:26.987 [2024-11-15 11:33:09.729751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:26.987 [2024-11-15 11:33:09.729762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.729797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.729811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:26.987 [2024-11-15 11:33:09.729823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:26.987 [2024-11-15 11:33:09.729833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.729881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:26.987 [2024-11-15 11:33:09.729897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 
[2024-11-15 11:33:09.729907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:26.987 [2024-11-15 11:33:09.729919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:26.987 [2024-11-15 11:33:09.729929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.754825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.754867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:26.987 [2024-11-15 11:33:09.754898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.873 ms 00:21:26.987 [2024-11-15 11:33:09.754915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.754992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.987 [2024-11-15 11:33:09.755010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:26.987 [2024-11-15 11:33:09.755022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:26.987 [2024-11-15 11:33:09.755052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.987 [2024-11-15 11:33:09.756646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.767 ms, result 0 00:21:28.364  [2024-11-15T11:33:12.249Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-15T11:33:13.185Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-15T11:33:14.121Z] Copying: 66/1024 [MB] (22 MBps) [2024-11-15T11:33:15.059Z] Copying: 89/1024 [MB] (22 MBps) [2024-11-15T11:33:16.019Z] Copying: 112/1024 [MB] (23 MBps) [2024-11-15T11:33:16.955Z] Copying: 135/1024 [MB] (22 MBps) [2024-11-15T11:33:18.331Z] Copying: 158/1024 [MB] (22 MBps) [2024-11-15T11:33:19.267Z] Copying: 180/1024 [MB] (22 MBps) [2024-11-15T11:33:20.202Z] Copying: 203/1024 [MB] (23 MBps) [2024-11-15T11:33:21.137Z] Copying: 226/1024 [MB] (22 MBps) [2024-11-15T11:33:22.087Z] Copying: 249/1024 [MB] (23 MBps) [2024-11-15T11:33:23.037Z] Copying: 271/1024 [MB] (22 MBps) [2024-11-15T11:33:23.973Z] Copying: 295/1024 [MB] (23 MBps) [2024-11-15T11:33:25.352Z] Copying: 317/1024 [MB] (22 MBps) [2024-11-15T11:33:26.287Z] Copying: 340/1024 [MB] (22 MBps) [2024-11-15T11:33:27.267Z] Copying: 364/1024 [MB] (23 MBps) [2024-11-15T11:33:28.201Z] Copying: 387/1024 [MB] (23 MBps) [2024-11-15T11:33:29.138Z] Copying: 410/1024 [MB] (23 MBps) [2024-11-15T11:33:30.074Z] Copying: 433/1024 [MB] (23 MBps) [2024-11-15T11:33:31.012Z] Copying: 457/1024 [MB] (23 MBps) [2024-11-15T11:33:31.949Z] Copying: 480/1024 [MB] (23 MBps) [2024-11-15T11:33:33.324Z] Copying: 504/1024 [MB] (23 MBps) [2024-11-15T11:33:34.259Z] Copying: 527/1024 [MB] (23 MBps) [2024-11-15T11:33:35.194Z] Copying: 550/1024 [MB] (23 MBps) [2024-11-15T11:33:36.129Z] Copying: 574/1024 [MB] (23 MBps) [2024-11-15T11:33:37.065Z] Copying: 597/1024 [MB] (23 MBps) [2024-11-15T11:33:38.000Z] Copying: 621/1024 [MB] (23 MBps) [2024-11-15T11:33:38.933Z] Copying: 644/1024 [MB] (23 MBps) [2024-11-15T11:33:40.307Z] Copying: 667/1024 [MB] (22 MBps) [2024-11-15T11:33:41.242Z] Copying: 690/1024 [MB] (22 MBps) [2024-11-15T11:33:42.178Z] Copying: 713/1024 [MB] (22 MBps) [2024-11-15T11:33:43.113Z] Copying: 736/1024 [MB] (22 MBps) [2024-11-15T11:33:44.048Z] Copying: 758/1024 [MB] (22 MBps) [2024-11-15T11:33:44.980Z] Copying: 780/1024 [MB] (22 MBps) [2024-11-15T11:33:46.353Z] Copying: 803/1024 [MB] (22 MBps) [2024-11-15T11:33:47.286Z] 
Copying: 825/1024 [MB] (21 MBps) [2024-11-15T11:33:48.255Z] Copying: 847/1024 [MB] (21 MBps) [2024-11-15T11:33:49.190Z] Copying: 868/1024 [MB] (21 MBps) [2024-11-15T11:33:50.125Z] Copying: 890/1024 [MB] (21 MBps) [2024-11-15T11:33:51.063Z] Copying: 913/1024 [MB] (22 MBps) [2024-11-15T11:33:51.998Z] Copying: 935/1024 [MB] (22 MBps) [2024-11-15T11:33:52.933Z] Copying: 957/1024 [MB] (22 MBps) [2024-11-15T11:33:54.309Z] Copying: 980/1024 [MB] (22 MBps) [2024-11-15T11:33:54.874Z] Copying: 1002/1024 [MB] (22 MBps) [2024-11-15T11:33:55.132Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-15 11:33:54.963144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:54.963247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:12.183 [2024-11-15 11:33:54.963269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:12.183 [2024-11-15 11:33:54.963281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:54.963326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.183 [2024-11-15 11:33:54.967626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:54.967660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:12.183 [2024-11-15 11:33:54.967683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.271 ms 00:22:12.183 [2024-11-15 11:33:54.967695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:54.968080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:54.968138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:12.183 [2024-11-15 11:33:54.968154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:22:12.183 [2024-11-15 11:33:54.968165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:54.971720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:54.971759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:12.183 [2024-11-15 11:33:54.971771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.534 ms 00:22:12.183 [2024-11-15 11:33:54.971782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:54.978226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:54.978271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:12.183 [2024-11-15 11:33:54.978290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.416 ms 00:22:12.183 [2024-11-15 11:33:54.978301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:55.009037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:55.009113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.183 [2024-11-15 11:33:55.009147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.652 ms 00:22:12.183 [2024-11-15 11:33:55.009158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:55.026701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 
11:33:55.026750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:12.183 [2024-11-15 11:33:55.026780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.500 ms 00:22:12.183 [2024-11-15 11:33:55.026792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:55.026948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:55.026977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:12.183 [2024-11-15 11:33:55.026990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:22:12.183 [2024-11-15 11:33:55.027001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:55.057530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:55.057579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:12.183 [2024-11-15 11:33:55.057594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.479 ms 00:22:12.183 [2024-11-15 11:33:55.057604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:55.086262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:55.086321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:12.183 [2024-11-15 11:33:55.086335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.619 ms 00:22:12.183 [2024-11-15 11:33:55.086345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.183 [2024-11-15 11:33:55.115637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.183 [2024-11-15 11:33:55.115684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:12.183 [2024-11-15 11:33:55.115698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.253 ms 00:22:12.183 [2024-11-15 11:33:55.115708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.443 [2024-11-15 11:33:55.144596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.443 [2024-11-15 11:33:55.144646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:12.443 [2024-11-15 11:33:55.144675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.809 ms 00:22:12.443 [2024-11-15 11:33:55.144701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.443 [2024-11-15 11:33:55.144740] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:12.443 [2024-11-15 11:33:55.144770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:22:12.443 [2024-11-15 11:33:55.144849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.144992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:12.443 [2024-11-15 11:33:55.145455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145875] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.145994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:12.444 [2024-11-15 11:33:55.146154] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:12.444 [2024-11-15 11:33:55.146187] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf6a3513-021e-47da-a6ae-9434b328e950 00:22:12.444 [2024-11-15 11:33:55.146201] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:12.444 [2024-11-15 11:33:55.146212] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:12.444 [2024-11-15 11:33:55.146223] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:12.444 [2024-11-15 11:33:55.146246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:12.444 [2024-11-15 
11:33:55.146257] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:12.444 [2024-11-15 11:33:55.146269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:12.444 [2024-11-15 11:33:55.146292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:12.444 [2024-11-15 11:33:55.146303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:12.444 [2024-11-15 11:33:55.146314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:12.444 [2024-11-15 11:33:55.146326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.444 [2024-11-15 11:33:55.146337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:12.444 [2024-11-15 11:33:55.146350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.587 ms 00:22:12.444 [2024-11-15 11:33:55.146361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.163057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.444 [2024-11-15 11:33:55.163134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:12.444 [2024-11-15 11:33:55.163150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.646 ms 00:22:12.444 [2024-11-15 11:33:55.163160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.163644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.444 [2024-11-15 11:33:55.163672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:12.444 [2024-11-15 11:33:55.163686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:22:12.444 [2024-11-15 11:33:55.163704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.207239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.444 [2024-11-15 11:33:55.207292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.444 [2024-11-15 11:33:55.207322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.444 [2024-11-15 11:33:55.207334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.207394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.444 [2024-11-15 11:33:55.207408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.444 [2024-11-15 11:33:55.207436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.444 [2024-11-15 11:33:55.207468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.207576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.444 [2024-11-15 11:33:55.207611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.444 [2024-11-15 11:33:55.207631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.444 [2024-11-15 11:33:55.207644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.207682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.444 [2024-11-15 11:33:55.207696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.444 [2024-11-15 11:33:55.207713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:22:12.444 [2024-11-15 11:33:55.207724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.444 [2024-11-15 11:33:55.312649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.445 [2024-11-15 11:33:55.312737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.445 [2024-11-15 11:33:55.312756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.445 [2024-11-15 11:33:55.312767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.400667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.400794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.703 [2024-11-15 11:33:55.400828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.400846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.400941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.400957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.703 [2024-11-15 11:33:55.400968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.400978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.401020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.401033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.703 [2024-11-15 11:33:55.401045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.401055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.401276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.401298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.703 [2024-11-15 11:33:55.401311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.401323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.401402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.401422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:12.703 [2024-11-15 11:33:55.401436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.401462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.401540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.401556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.703 [2024-11-15 11:33:55.401586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.401598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.401671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:12.703 [2024-11-15 11:33:55.401691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.703 
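The Bands validity dump above prints one line per band, and with 100 bands a shell tally is easier to read than the raw list. A sketch against the same hypothetical console.log capture:

    # Tally band states from the ftl_dev_dump_bands output; on this
    # shutdown dump all 100 bands report "free" with wr_cnt 0.
    grep -o "state: [a-z]*" console.log | sort | uniq -c
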
[2024-11-15 11:33:55.401708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:12.703 [2024-11-15 11:33:55.401720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.703 [2024-11-15 11:33:55.401888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 438.710 ms, result 0 00:22:13.638 00:22:13.638 00:22:13.638 11:33:56 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:15.538 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:15.538 11:33:58 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:15.538 [2024-11-15 11:33:58.483813] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:22:15.538 [2024-11-15 11:33:58.483996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77557 ] 00:22:15.796 [2024-11-15 11:33:58.665691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.054 [2024-11-15 11:33:58.808709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.312 [2024-11-15 11:33:59.149216] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.312 [2024-11-15 11:33:59.149308] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.571 [2024-11-15 11:33:59.311702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.311749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.571 [2024-11-15 11:33:59.311788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:16.571 [2024-11-15 11:33:59.311799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.311855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.311871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.571 [2024-11-15 11:33:59.311885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:16.571 [2024-11-15 11:33:59.311895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.311921] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.571 [2024-11-15 11:33:59.312860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.571 [2024-11-15 11:33:59.312932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.312944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.571 [2024-11-15 11:33:59.312971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:22:16.571 [2024-11-15 11:33:59.312981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.315155] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:16.571 [2024-11-15 
11:33:59.330547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.330586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:16.571 [2024-11-15 11:33:59.330617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.394 ms 00:22:16.571 [2024-11-15 11:33:59.330627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.330693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.330710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:16.571 [2024-11-15 11:33:59.330721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:16.571 [2024-11-15 11:33:59.330731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.340118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.340169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.571 [2024-11-15 11:33:59.340197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.291 ms 00:22:16.571 [2024-11-15 11:33:59.340213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.340349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.340367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.571 [2024-11-15 11:33:59.340379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:22:16.571 [2024-11-15 11:33:59.340390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.571 [2024-11-15 11:33:59.340457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.571 [2024-11-15 11:33:59.340490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.571 [2024-11-15 11:33:59.340503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:16.572 [2024-11-15 11:33:59.340528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.572 [2024-11-15 11:33:59.340566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:16.572 [2024-11-15 11:33:59.345426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.572 [2024-11-15 11:33:59.345491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.572 [2024-11-15 11:33:59.345520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.874 ms 00:22:16.572 [2024-11-15 11:33:59.345535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.572 [2024-11-15 11:33:59.345575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.572 [2024-11-15 11:33:59.345589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.572 [2024-11-15 11:33:59.345601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:16.572 [2024-11-15 11:33:59.345611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.572 [2024-11-15 11:33:59.345673] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:16.572 [2024-11-15 11:33:59.345703] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 
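The verify-then-continue pattern of the restore test is visible above: md5sum -c confirms the data that survived the FTL shutdown, then spdk_dd reopens ftl0 to write the next slice of the test file, which is what triggers the second startup sequence being traced here. The same two commands reproduce that step by hand; the flags are exactly as invoked above, with paths shortened to be relative to the spdk repository:

    # 1) Verify the previously written data against its recorded checksum.
    md5sum -c test/ftl/testfile.md5
    # 2) Write the test file into the ftl0 bdev, resuming at the --seek
    #    offset the harness chose (value taken verbatim from the log).
    ./build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
        --json=test/ftl/config/ftl.json --seek=131072
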
00:22:16.572 [2024-11-15 11:33:59.345757] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:16.572 [2024-11-15 11:33:59.345784] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:16.572 [2024-11-15 11:33:59.345934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:16.572 [2024-11-15 11:33:59.345954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.572 [2024-11-15 11:33:59.345968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:16.572 [2024-11-15 11:33:59.345982] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.572 [2024-11-15 11:33:59.345995] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346007] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:16.572 [2024-11-15 11:33:59.346018] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.572 [2024-11-15 11:33:59.346028] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:16.572 [2024-11-15 11:33:59.346044] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:16.572 [2024-11-15 11:33:59.346088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.572 [2024-11-15 11:33:59.346108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.572 [2024-11-15 11:33:59.346121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:22:16.572 [2024-11-15 11:33:59.346131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.572 [2024-11-15 11:33:59.346253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.572 [2024-11-15 11:33:59.346268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.572 [2024-11-15 11:33:59.346279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:22:16.572 [2024-11-15 11:33:59.346289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.572 [2024-11-15 11:33:59.346400] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.572 [2024-11-15 11:33:59.346427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.572 [2024-11-15 11:33:59.346440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.572 [2024-11-15 11:33:59.346471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.572 [2024-11-15 11:33:59.346499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
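The numbers in this layout dump can be spot-checked against each other: the l2p region stores one address per L2P entry, and each region starts where the previous one ends (sb at 0.00 MiB + 0.12 MiB gives l2p at 0.12 MiB; l2p at 0.12 MiB + 80.00 MiB gives band_md at 80.12 MiB). A one-line check of the l2p size:

    # 20971520 L2P entries x 4-byte address size = 80 MiB, matching
    # "Region l2p ... blocks: 80.00 MiB" in the dump above.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80
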
00:22:16.572 [2024-11-15 11:33:59.346518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.572 [2024-11-15 11:33:59.346527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:16.572 [2024-11-15 11:33:59.346536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.572 [2024-11-15 11:33:59.346545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.572 [2024-11-15 11:33:59.346557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:16.572 [2024-11-15 11:33:59.346595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:16.572 [2024-11-15 11:33:59.346630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.572 [2024-11-15 11:33:59.346659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.572 [2024-11-15 11:33:59.346687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.572 [2024-11-15 11:33:59.346747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.572 [2024-11-15 11:33:59.346787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.572 [2024-11-15 11:33:59.346817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.572 [2024-11-15 11:33:59.346837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.572 [2024-11-15 11:33:59.346847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:16.572 [2024-11-15 11:33:59.346857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.572 [2024-11-15 11:33:59.346867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:16.572 [2024-11-15 11:33:59.346878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:16.572 [2024-11-15 11:33:59.346887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:16.572 [2024-11-15 11:33:59.346908] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:16.572 [2024-11-15 11:33:59.346918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.572 [2024-11-15 11:33:59.346939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.572 [2024-11-15 11:33:59.346950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.572 [2024-11-15 11:33:59.346961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.572 [2024-11-15 11:33:59.346973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:16.572 [2024-11-15 11:33:59.346983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.572 [2024-11-15 11:33:59.346994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.572 [2024-11-15 11:33:59.347004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.572 [2024-11-15 11:33:59.347014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.572 [2024-11-15 11:33:59.347024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.572 [2024-11-15 11:33:59.347036] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.572 [2024-11-15 11:33:59.347050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.572 [2024-11-15 11:33:59.347062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:16.572 [2024-11-15 11:33:59.347089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:16.572 [2024-11-15 11:33:59.347100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:16.572 [2024-11-15 11:33:59.347172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:16.572 [2024-11-15 11:33:59.347183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:16.572 [2024-11-15 11:33:59.347194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:16.572 [2024-11-15 11:33:59.347204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:16.572 [2024-11-15 11:33:59.347215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:16.572 [2024-11-15 11:33:59.347225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:16.572 [2024-11-15 11:33:59.347236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:16.572 [2024-11-15 11:33:59.347245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:16.572 [2024-11-15 11:33:59.347255] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:16.572 [2024-11-15 11:33:59.347265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:16.573 [2024-11-15 11:33:59.347276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:16.573 [2024-11-15 11:33:59.347286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.573 [2024-11-15 11:33:59.347304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.573 [2024-11-15 11:33:59.347315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.573 [2024-11-15 11:33:59.347326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:16.573 [2024-11-15 11:33:59.347336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.573 [2024-11-15 11:33:59.347347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:16.573 [2024-11-15 11:33:59.347358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.347369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.573 [2024-11-15 11:33:59.347380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:22:16.573 [2024-11-15 11:33:59.347391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.386496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.386563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.573 [2024-11-15 11:33:59.386611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.045 ms 00:22:16.573 [2024-11-15 11:33:59.386623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.386784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.386821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.573 [2024-11-15 11:33:59.386850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:16.573 [2024-11-15 11:33:59.386861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.440772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.440856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.573 [2024-11-15 11:33:59.440890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.827 ms 00:22:16.573 [2024-11-15 11:33:59.440902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.440962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.440980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
valid map 00:22:16.573 [2024-11-15 11:33:59.441013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:16.573 [2024-11-15 11:33:59.441036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.441793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.441836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.573 [2024-11-15 11:33:59.441849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:22:16.573 [2024-11-15 11:33:59.441860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.442014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.442048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.573 [2024-11-15 11:33:59.442071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:22:16.573 [2024-11-15 11:33:59.442091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.461266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.461310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.573 [2024-11-15 11:33:59.461346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.148 ms 00:22:16.573 [2024-11-15 11:33:59.461358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.477289] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:16.573 [2024-11-15 11:33:59.477349] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:16.573 [2024-11-15 11:33:59.477398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.477425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:16.573 [2024-11-15 11:33:59.477436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.861 ms 00:22:16.573 [2024-11-15 11:33:59.477446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.573 [2024-11-15 11:33:59.504351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.573 [2024-11-15 11:33:59.504393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:16.573 [2024-11-15 11:33:59.504445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.857 ms 00:22:16.573 [2024-11-15 11:33:59.504456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.518589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.518659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:16.832 [2024-11-15 11:33:59.518688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.059 ms 00:22:16.832 [2024-11-15 11:33:59.518699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.532497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.532561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:16.832 [2024-11-15 11:33:59.532590] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.728 ms 00:22:16.832 [2024-11-15 11:33:59.532599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.533520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.533589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.832 [2024-11-15 11:33:59.533619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:22:16.832 [2024-11-15 11:33:59.533635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.613080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.613168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:16.832 [2024-11-15 11:33:59.613196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.419 ms 00:22:16.832 [2024-11-15 11:33:59.613208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.625040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:16.832 [2024-11-15 11:33:59.628362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.628393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.832 [2024-11-15 11:33:59.628424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.088 ms 00:22:16.832 [2024-11-15 11:33:59.628434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.628584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.628618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:16.832 [2024-11-15 11:33:59.628631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:16.832 [2024-11-15 11:33:59.628645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.628762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.628790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.832 [2024-11-15 11:33:59.628803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:16.832 [2024-11-15 11:33:59.628813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.628848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.628863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:16.832 [2024-11-15 11:33:59.628874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:16.832 [2024-11-15 11:33:59.628884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.628930] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:16.832 [2024-11-15 11:33:59.628946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.628957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:16.832 [2024-11-15 11:33:59.628968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:16.832 [2024-11-15 11:33:59.628979] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.832 [2024-11-15 11:33:59.659601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.832 [2024-11-15 11:33:59.659641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:16.833 [2024-11-15 11:33:59.659673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.599 ms 00:22:16.833 [2024-11-15 11:33:59.659691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.833 [2024-11-15 11:33:59.659878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.833 [2024-11-15 11:33:59.659899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:16.833 [2024-11-15 11:33:59.659911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:16.833 [2024-11-15 11:33:59.659921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.833 [2024-11-15 11:33:59.661614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.239 ms, result 0 00:22:17.768  [2024-11-15T11:34:02.092Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-15T11:34:03.028Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-15T11:34:04.045Z] Copying: 67/1024 [MB] (22 MBps) [2024-11-15T11:34:04.980Z] Copying: 89/1024 [MB] (22 MBps) [2024-11-15T11:34:05.915Z] Copying: 111/1024 [MB] (21 MBps) [2024-11-15T11:34:06.849Z] Copying: 133/1024 [MB] (21 MBps) [2024-11-15T11:34:07.784Z] Copying: 155/1024 [MB] (21 MBps) [2024-11-15T11:34:08.721Z] Copying: 177/1024 [MB] (22 MBps) [2024-11-15T11:34:10.099Z] Copying: 199/1024 [MB] (22 MBps) [2024-11-15T11:34:11.046Z] Copying: 222/1024 [MB] (22 MBps) [2024-11-15T11:34:12.000Z] Copying: 244/1024 [MB] (21 MBps) [2024-11-15T11:34:12.936Z] Copying: 266/1024 [MB] (22 MBps) [2024-11-15T11:34:13.871Z] Copying: 288/1024 [MB] (22 MBps) [2024-11-15T11:34:14.809Z] Copying: 310/1024 [MB] (21 MBps) [2024-11-15T11:34:15.746Z] Copying: 332/1024 [MB] (22 MBps) [2024-11-15T11:34:16.683Z] Copying: 355/1024 [MB] (22 MBps) [2024-11-15T11:34:18.059Z] Copying: 378/1024 [MB] (22 MBps) [2024-11-15T11:34:18.994Z] Copying: 400/1024 [MB] (22 MBps) [2024-11-15T11:34:19.927Z] Copying: 422/1024 [MB] (22 MBps) [2024-11-15T11:34:20.863Z] Copying: 445/1024 [MB] (22 MBps) [2024-11-15T11:34:21.873Z] Copying: 467/1024 [MB] (22 MBps) [2024-11-15T11:34:22.808Z] Copying: 489/1024 [MB] (22 MBps) [2024-11-15T11:34:23.744Z] Copying: 511/1024 [MB] (21 MBps) [2024-11-15T11:34:24.678Z] Copying: 534/1024 [MB] (22 MBps) [2024-11-15T11:34:26.053Z] Copying: 556/1024 [MB] (22 MBps) [2024-11-15T11:34:26.988Z] Copying: 578/1024 [MB] (22 MBps) [2024-11-15T11:34:27.924Z] Copying: 600/1024 [MB] (21 MBps) [2024-11-15T11:34:28.860Z] Copying: 623/1024 [MB] (22 MBps) [2024-11-15T11:34:29.795Z] Copying: 645/1024 [MB] (22 MBps) [2024-11-15T11:34:30.731Z] Copying: 667/1024 [MB] (22 MBps) [2024-11-15T11:34:32.105Z] Copying: 690/1024 [MB] (22 MBps) [2024-11-15T11:34:33.040Z] Copying: 712/1024 [MB] (22 MBps) [2024-11-15T11:34:33.976Z] Copying: 734/1024 [MB] (21 MBps) [2024-11-15T11:34:34.914Z] Copying: 756/1024 [MB] (22 MBps) [2024-11-15T11:34:35.849Z] Copying: 779/1024 [MB] (22 MBps) [2024-11-15T11:34:36.784Z] Copying: 801/1024 [MB] (22 MBps) [2024-11-15T11:34:37.718Z] Copying: 823/1024 [MB] (22 MBps) [2024-11-15T11:34:39.093Z] Copying: 845/1024 [MB] (22 MBps) [2024-11-15T11:34:40.026Z] Copying: 868/1024 [MB] (22 MBps) [2024-11-15T11:34:40.962Z] Copying: 890/1024 [MB] (22 MBps) 
[2024-11-15T11:34:41.897Z] Copying: 912/1024 [MB] (22 MBps) [2024-11-15T11:34:42.837Z] Copying: 934/1024 [MB] (22 MBps) [2024-11-15T11:34:43.772Z] Copying: 957/1024 [MB] (22 MBps) [2024-11-15T11:34:44.709Z] Copying: 979/1024 [MB] (22 MBps) [2024-11-15T11:34:46.083Z] Copying: 1001/1024 [MB] (22 MBps) [2024-11-15T11:34:47.020Z] Copying: 1023/1024 [MB] (21 MBps) [2024-11-15T11:34:47.020Z] Copying: 1048484/1048576 [kB] (856 kBps) [2024-11-15T11:34:47.020Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-15 11:34:46.800189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.800372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:04.071 [2024-11-15 11:34:46.800396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:04.071 [2024-11-15 11:34:46.800422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:46.804089] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:04.071 [2024-11-15 11:34:46.811103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.811298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:04.071 [2024-11-15 11:34:46.811411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.710 ms 00:23:04.071 [2024-11-15 11:34:46.811456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:46.823833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.824086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:04.071 [2024-11-15 11:34:46.824236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.149 ms 00:23:04.071 [2024-11-15 11:34:46.824266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:46.846481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.846539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:04.071 [2024-11-15 11:34:46.846586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.189 ms 00:23:04.071 [2024-11-15 11:34:46.846597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:46.852620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.852652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:04.071 [2024-11-15 11:34:46.852680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.956 ms 00:23:04.071 [2024-11-15 11:34:46.852690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:46.881026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.881145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:04.071 [2024-11-15 11:34:46.881178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.266 ms 00:23:04.071 [2024-11-15 11:34:46.881189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:46.897922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:46.897979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid 
map metadata 00:23:04.071 [2024-11-15 11:34:46.898010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.690 ms 00:23:04.071 [2024-11-15 11:34:46.898021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.071 [2024-11-15 11:34:47.018360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.071 [2024-11-15 11:34:47.018431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:04.071 [2024-11-15 11:34:47.018479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.280 ms 00:23:04.071 [2024-11-15 11:34:47.018490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.331 [2024-11-15 11:34:47.046412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.331 [2024-11-15 11:34:47.046467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:04.331 [2024-11-15 11:34:47.046497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.902 ms 00:23:04.331 [2024-11-15 11:34:47.046507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.331 [2024-11-15 11:34:47.073922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.331 [2024-11-15 11:34:47.073989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:04.331 [2024-11-15 11:34:47.074019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.377 ms 00:23:04.331 [2024-11-15 11:34:47.074030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.331 [2024-11-15 11:34:47.100988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.331 [2024-11-15 11:34:47.101052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:04.331 [2024-11-15 11:34:47.101066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.912 ms 00:23:04.331 [2024-11-15 11:34:47.101076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.331 [2024-11-15 11:34:47.127638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.331 [2024-11-15 11:34:47.127676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:04.331 [2024-11-15 11:34:47.127705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.457 ms 00:23:04.331 [2024-11-15 11:34:47.127715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.331 [2024-11-15 11:34:47.127766] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:04.331 [2024-11-15 11:34:47.127825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 119296 / 261120 wr_cnt: 1 state: open 00:23:04.331 [2024-11-15 11:34:47.127839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127893] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.127996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128299] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:04.331 [2024-11-15 11:34:47.128397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 
11:34:47.128613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:23:04.332 [2024-11-15 11:34:47.128952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.128997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:04.332 [2024-11-15 11:34:47.129224] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:04.332 [2024-11-15 11:34:47.129236] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf6a3513-021e-47da-a6ae-9434b328e950 00:23:04.332 [2024-11-15 11:34:47.129248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 119296 00:23:04.332 [2024-11-15 11:34:47.129258] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 120256 00:23:04.332 [2024-11-15 11:34:47.129268] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 119296 00:23:04.332 [2024-11-15 11:34:47.129279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080 00:23:04.332 [2024-11-15 11:34:47.129290] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
limits: 00:23:04.332 [2024-11-15 11:34:47.129306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:04.332 [2024-11-15 11:34:47.129328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:04.332 [2024-11-15 11:34:47.129338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:04.332 [2024-11-15 11:34:47.129348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:04.332 [2024-11-15 11:34:47.129359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.332 [2024-11-15 11:34:47.129370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:04.332 [2024-11-15 11:34:47.129382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.594 ms 00:23:04.332 [2024-11-15 11:34:47.129392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.332 [2024-11-15 11:34:47.146706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.332 [2024-11-15 11:34:47.146760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:04.332 [2024-11-15 11:34:47.146790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.231 ms 00:23:04.332 [2024-11-15 11:34:47.146808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.332 [2024-11-15 11:34:47.147362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.332 [2024-11-15 11:34:47.147419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:04.332 [2024-11-15 11:34:47.147449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:23:04.332 [2024-11-15 11:34:47.147461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.332 [2024-11-15 11:34:47.187973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.332 [2024-11-15 11:34:47.188100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:04.332 [2024-11-15 11:34:47.188116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.332 [2024-11-15 11:34:47.188126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.332 [2024-11-15 11:34:47.188181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.332 [2024-11-15 11:34:47.188195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:04.332 [2024-11-15 11:34:47.188206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.332 [2024-11-15 11:34:47.188216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.332 [2024-11-15 11:34:47.188299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.332 [2024-11-15 11:34:47.188351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:04.332 [2024-11-15 11:34:47.188385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.332 [2024-11-15 11:34:47.188396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.332 [2024-11-15 11:34:47.188420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.332 [2024-11-15 11:34:47.188432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:04.332 [2024-11-15 11:34:47.188459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.332 [2024-11-15 11:34:47.188470] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.282802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.282895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:04.592 [2024-11-15 11:34:47.282965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.282976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.352414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.352494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:04.592 [2024-11-15 11:34:47.352525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.352535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.352626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.352642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.592 [2024-11-15 11:34:47.352663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.352685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.352726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.352740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.592 [2024-11-15 11:34:47.352766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.352792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.352921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.352939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.592 [2024-11-15 11:34:47.352951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.352961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.353025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.353069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:04.592 [2024-11-15 11:34:47.353090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.353111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.353172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.353196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.592 [2024-11-15 11:34:47.353208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.353218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.353305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:04.592 [2024-11-15 11:34:47.353320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.592 [2024-11-15 11:34:47.353332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:04.592 [2024-11-15 11:34:47.353342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.592 [2024-11-15 11:34:47.353525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 555.609 ms, result 0 00:23:05.968 00:23:05.968 00:23:05.968 11:34:48 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:05.968 [2024-11-15 11:34:48.822821] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:23:05.968 [2024-11-15 11:34:48.823011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78050 ] 00:23:06.227 [2024-11-15 11:34:49.003745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.227 [2024-11-15 11:34:49.110572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.795 [2024-11-15 11:34:49.461829] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:06.795 [2024-11-15 11:34:49.461973] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:06.795 [2024-11-15 11:34:49.623189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.795 [2024-11-15 11:34:49.623254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:06.795 [2024-11-15 11:34:49.623295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:06.795 [2024-11-15 11:34:49.623305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.795 [2024-11-15 11:34:49.623363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.795 [2024-11-15 11:34:49.623379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.795 [2024-11-15 11:34:49.623395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:06.795 [2024-11-15 11:34:49.623404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.795 [2024-11-15 11:34:49.623447] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:06.795 [2024-11-15 11:34:49.624423] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:06.795 [2024-11-15 11:34:49.624490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.795 [2024-11-15 11:34:49.624503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.795 [2024-11-15 11:34:49.624515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:23:06.795 [2024-11-15 11:34:49.624526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.795 [2024-11-15 11:34:49.626486] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:06.795 [2024-11-15 11:34:49.641618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.641659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:06.796 [2024-11-15 11:34:49.641699] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.134 ms 00:23:06.796 [2024-11-15 11:34:49.641709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.641794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.641831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:06.796 [2024-11-15 11:34:49.641843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:06.796 [2024-11-15 11:34:49.641853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.651257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.651320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.796 [2024-11-15 11:34:49.651349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.287 ms 00:23:06.796 [2024-11-15 11:34:49.651382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.651466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.651482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.796 [2024-11-15 11:34:49.651493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:06.796 [2024-11-15 11:34:49.651502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.651552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.651601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:06.796 [2024-11-15 11:34:49.651613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:06.796 [2024-11-15 11:34:49.651640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.651675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:06.796 [2024-11-15 11:34:49.656324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.656373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.796 [2024-11-15 11:34:49.656387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:23:06.796 [2024-11-15 11:34:49.656403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.656474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.656488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:06.796 [2024-11-15 11:34:49.656499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:06.796 [2024-11-15 11:34:49.656509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.656568] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:06.796 [2024-11-15 11:34:49.656596] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:06.796 [2024-11-15 11:34:49.656682] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:06.796 [2024-11-15 11:34:49.656707] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:06.796 [2024-11-15 11:34:49.656810] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:06.796 [2024-11-15 11:34:49.656825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:06.796 [2024-11-15 11:34:49.656838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:06.796 [2024-11-15 11:34:49.656852] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:06.796 [2024-11-15 11:34:49.656879] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:06.796 [2024-11-15 11:34:49.656890] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:06.796 [2024-11-15 11:34:49.656900] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:06.796 [2024-11-15 11:34:49.656910] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:06.796 [2024-11-15 11:34:49.656926] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:06.796 [2024-11-15 11:34:49.656937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.656947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:06.796 [2024-11-15 11:34:49.656958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:23:06.796 [2024-11-15 11:34:49.656968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.657055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.796 [2024-11-15 11:34:49.657069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:06.796 [2024-11-15 11:34:49.657131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:06.796 [2024-11-15 11:34:49.657147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.796 [2024-11-15 11:34:49.657262] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:06.796 [2024-11-15 11:34:49.657282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:06.796 [2024-11-15 11:34:49.657295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:06.796 [2024-11-15 11:34:49.657327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:06.796 [2024-11-15 11:34:49.657357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:06.796 [2024-11-15 11:34:49.657409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:06.796 [2024-11-15 11:34:49.657434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:06.796 [2024-11-15 
11:34:49.657456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:06.796 [2024-11-15 11:34:49.657466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:06.796 [2024-11-15 11:34:49.657477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:06.796 [2024-11-15 11:34:49.657514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:06.796 [2024-11-15 11:34:49.657546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:06.796 [2024-11-15 11:34:49.657578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:06.796 [2024-11-15 11:34:49.657609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:06.796 [2024-11-15 11:34:49.657640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:06.796 [2024-11-15 11:34:49.657672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:06.796 [2024-11-15 11:34:49.657703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:06.796 [2024-11-15 11:34:49.657724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:06.796 [2024-11-15 11:34:49.657734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:06.796 [2024-11-15 11:34:49.657744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:06.796 [2024-11-15 11:34:49.657755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:06.796 [2024-11-15 11:34:49.657765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:06.796 [2024-11-15 11:34:49.657776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:06.796 [2024-11-15 11:34:49.657796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:06.796 [2024-11-15 11:34:49.657806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657831] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] 
Base device layout: 00:23:06.796 [2024-11-15 11:34:49.657857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:06.796 [2024-11-15 11:34:49.657883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:06.796 [2024-11-15 11:34:49.657903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:06.796 [2024-11-15 11:34:49.657913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:06.796 [2024-11-15 11:34:49.657922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:06.796 [2024-11-15 11:34:49.657932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:06.796 [2024-11-15 11:34:49.657941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:06.796 [2024-11-15 11:34:49.657951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:06.796 [2024-11-15 11:34:49.657962] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:06.796 [2024-11-15 11:34:49.657975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:06.796 [2024-11-15 11:34:49.657987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:06.797 [2024-11-15 11:34:49.657997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:06.797 [2024-11-15 11:34:49.658007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:06.797 [2024-11-15 11:34:49.658019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:06.797 [2024-11-15 11:34:49.658030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:06.797 [2024-11-15 11:34:49.658040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:06.797 [2024-11-15 11:34:49.658051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:06.797 [2024-11-15 11:34:49.658061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:06.797 [2024-11-15 11:34:49.658076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:06.797 [2024-11-15 11:34:49.658087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:06.797 [2024-11-15 11:34:49.658097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:06.797 [2024-11-15 11:34:49.658108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:06.797 [2024-11-15 11:34:49.658118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:06.797 [2024-11-15 11:34:49.658128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:06.797 [2024-11-15 11:34:49.658154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:06.797 [2024-11-15 11:34:49.658172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:06.797 [2024-11-15 11:34:49.658184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:06.797 [2024-11-15 11:34:49.658194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:06.797 [2024-11-15 11:34:49.658205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:06.797 [2024-11-15 11:34:49.658215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:06.797 [2024-11-15 11:34:49.658226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.797 [2024-11-15 11:34:49.658262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:06.797 [2024-11-15 11:34:49.658274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:23:06.797 [2024-11-15 11:34:49.658292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.797 [2024-11-15 11:34:49.696439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.797 [2024-11-15 11:34:49.696529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.797 [2024-11-15 11:34:49.696551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.087 ms 00:23:06.797 [2024-11-15 11:34:49.696562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.797 [2024-11-15 11:34:49.696687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.797 [2024-11-15 11:34:49.696702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:06.797 [2024-11-15 11:34:49.696713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:06.797 [2024-11-15 11:34:49.696722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.745275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.745342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:07.056 [2024-11-15 11:34:49.745360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.477 ms 00:23:07.056 [2024-11-15 11:34:49.745371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.745446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.745462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.056 [2024-11-15 11:34:49.745481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:07.056 [2024-11-15 11:34:49.745491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 
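Editor's note: the trace records above follow a fixed cadence from mngt/ftl_mngt.c: each management step logs an Action, its name, a duration, and a status. A quick way to sanity-check a run is to pair every "name:" record with the "duration:" record that follows it and sum the steps; the total should land close to the "Management process finished" figure that appears at the end of startup further below. The snippet is a minimal post-processing sketch, not part of the SPDK tree; the regexes and helper name are assumptions based only on the NOTICE format visible in this log.

import re

# Hypothetical helper for offline log analysis (not SPDK code). The patterns
# assume the exact record shapes above: "name: <step> HH:MM:SS.mmm" and
# "duration: <float> ms". The finish_msg line uses "duration = ", so it is
# deliberately not matched here.
NAME = re.compile(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}")
DURATION = re.compile(r"duration: ([0-9.]+) ms")

def step_durations(log_text):
    # Steps alternate name -> duration in well-formed logs, so zipping the
    # two match lists pairs each step with its own duration.
    names = NAME.findall(log_text)
    durations = [float(d) for d in DURATION.findall(log_text)]
    return list(zip(names, durations))

sample = ("[FTL][ftl0] name: Validate super block 00:23:06.796 "
          "[FTL][ftl0] duration: 0.032 ms 00:23:06.796 "
          "[FTL][ftl0] name: Initialize memory pools 00:23:06.796 "
          "[FTL][ftl0] duration: 9.287 ms 00:23:06.796 ")
pairs = step_durations(sample)
for name, ms in pairs:
    print(f"{name}: {ms} ms")
print(f"total: {sum(ms for _, ms in pairs):.3f} ms")

Fed the full startup trace, the summed step durations should come out near the 328.385 ms that finish_msg reports for 'FTL startup' below (only near, since the steps do not account for every moment between them).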
[2024-11-15 11:34:49.746237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.746282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.056 [2024-11-15 11:34:49.746300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:23:07.056 [2024-11-15 11:34:49.746328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.746543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.746585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.056 [2024-11-15 11:34:49.746614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:23:07.056 [2024-11-15 11:34:49.746633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.765223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.765281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.056 [2024-11-15 11:34:49.765302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.563 ms 00:23:07.056 [2024-11-15 11:34:49.765313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.780697] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:07.056 [2024-11-15 11:34:49.780741] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:07.056 [2024-11-15 11:34:49.780781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.780808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:07.056 [2024-11-15 11:34:49.780851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.316 ms 00:23:07.056 [2024-11-15 11:34:49.780862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.806490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.806549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:07.056 [2024-11-15 11:34:49.806579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.584 ms 00:23:07.056 [2024-11-15 11:34:49.806590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.820393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.820462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:07.056 [2024-11-15 11:34:49.820477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.761 ms 00:23:07.056 [2024-11-15 11:34:49.820487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.834081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.834127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:07.056 [2024-11-15 11:34:49.834157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.540 ms 00:23:07.056 [2024-11-15 11:34:49.834167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.835003] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.835079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:07.056 [2024-11-15 11:34:49.835093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:23:07.056 [2024-11-15 11:34:49.835109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.904639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.904709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:07.056 [2024-11-15 11:34:49.904750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.490 ms 00:23:07.056 [2024-11-15 11:34:49.904761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.915447] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:07.056 [2024-11-15 11:34:49.917778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.917830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:07.056 [2024-11-15 11:34:49.917862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:23:07.056 [2024-11-15 11:34:49.917873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.918009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.056 [2024-11-15 11:34:49.918026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:07.056 [2024-11-15 11:34:49.918042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:07.056 [2024-11-15 11:34:49.918052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.056 [2024-11-15 11:34:49.919989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.057 [2024-11-15 11:34:49.920019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:07.057 [2024-11-15 11:34:49.920072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:23:07.057 [2024-11-15 11:34:49.920084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.057 [2024-11-15 11:34:49.920134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.057 [2024-11-15 11:34:49.920164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:07.057 [2024-11-15 11:34:49.920176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:07.057 [2024-11-15 11:34:49.920192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.057 [2024-11-15 11:34:49.920231] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:07.057 [2024-11-15 11:34:49.920247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.057 [2024-11-15 11:34:49.920257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:07.057 [2024-11-15 11:34:49.920268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:07.057 [2024-11-15 11:34:49.920278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.057 [2024-11-15 11:34:49.950360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.057 [2024-11-15 11:34:49.950431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL dirty state 00:23:07.057 [2024-11-15 11:34:49.950483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.056 ms 00:23:07.057 [2024-11-15 11:34:49.950494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.057 [2024-11-15 11:34:49.950578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.057 [2024-11-15 11:34:49.950596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:07.057 [2024-11-15 11:34:49.950607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:07.057 [2024-11-15 11:34:49.950617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.057 [2024-11-15 11:34:49.952761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 328.385 ms, result 0 00:23:08.432  [2024-11-15T11:34:52.317Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-15T11:34:53.252Z] Copying: 41/1024 [MB] (22 MBps) [2024-11-15T11:34:54.186Z] Copying: 63/1024 [MB] (21 MBps) [2024-11-15T11:34:55.591Z] Copying: 85/1024 [MB] (21 MBps) [2024-11-15T11:34:56.174Z] Copying: 107/1024 [MB] (22 MBps) [2024-11-15T11:34:57.548Z] Copying: 129/1024 [MB] (22 MBps) [2024-11-15T11:34:58.483Z] Copying: 152/1024 [MB] (22 MBps) [2024-11-15T11:34:59.418Z] Copying: 174/1024 [MB] (22 MBps) [2024-11-15T11:35:00.353Z] Copying: 197/1024 [MB] (22 MBps) [2024-11-15T11:35:01.289Z] Copying: 220/1024 [MB] (22 MBps) [2024-11-15T11:35:02.225Z] Copying: 243/1024 [MB] (22 MBps) [2024-11-15T11:35:03.161Z] Copying: 265/1024 [MB] (22 MBps) [2024-11-15T11:35:04.537Z] Copying: 288/1024 [MB] (22 MBps) [2024-11-15T11:35:05.472Z] Copying: 311/1024 [MB] (23 MBps) [2024-11-15T11:35:06.406Z] Copying: 334/1024 [MB] (23 MBps) [2024-11-15T11:35:07.341Z] Copying: 357/1024 [MB] (22 MBps) [2024-11-15T11:35:08.276Z] Copying: 380/1024 [MB] (22 MBps) [2024-11-15T11:35:09.210Z] Copying: 402/1024 [MB] (22 MBps) [2024-11-15T11:35:10.585Z] Copying: 425/1024 [MB] (22 MBps) [2024-11-15T11:35:11.519Z] Copying: 447/1024 [MB] (22 MBps) [2024-11-15T11:35:12.454Z] Copying: 469/1024 [MB] (22 MBps) [2024-11-15T11:35:13.389Z] Copying: 491/1024 [MB] (21 MBps) [2024-11-15T11:35:14.325Z] Copying: 512/1024 [MB] (21 MBps) [2024-11-15T11:35:15.261Z] Copying: 535/1024 [MB] (22 MBps) [2024-11-15T11:35:16.214Z] Copying: 557/1024 [MB] (22 MBps) [2024-11-15T11:35:17.172Z] Copying: 579/1024 [MB] (22 MBps) [2024-11-15T11:35:18.545Z] Copying: 602/1024 [MB] (22 MBps) [2024-11-15T11:35:19.482Z] Copying: 624/1024 [MB] (22 MBps) [2024-11-15T11:35:20.417Z] Copying: 646/1024 [MB] (21 MBps) [2024-11-15T11:35:21.353Z] Copying: 668/1024 [MB] (21 MBps) [2024-11-15T11:35:22.288Z] Copying: 690/1024 [MB] (22 MBps) [2024-11-15T11:35:23.225Z] Copying: 712/1024 [MB] (21 MBps) [2024-11-15T11:35:24.161Z] Copying: 734/1024 [MB] (22 MBps) [2024-11-15T11:35:25.537Z] Copying: 756/1024 [MB] (22 MBps) [2024-11-15T11:35:26.473Z] Copying: 779/1024 [MB] (23 MBps) [2024-11-15T11:35:27.410Z] Copying: 802/1024 [MB] (23 MBps) [2024-11-15T11:35:28.346Z] Copying: 826/1024 [MB] (23 MBps) [2024-11-15T11:35:29.283Z] Copying: 849/1024 [MB] (23 MBps) [2024-11-15T11:35:30.219Z] Copying: 873/1024 [MB] (23 MBps) [2024-11-15T11:35:31.595Z] Copying: 897/1024 [MB] (23 MBps) [2024-11-15T11:35:32.162Z] Copying: 919/1024 [MB] (22 MBps) [2024-11-15T11:35:33.539Z] Copying: 942/1024 [MB] (22 MBps) [2024-11-15T11:35:34.475Z] Copying: 964/1024 [MB] (22 MBps) [2024-11-15T11:35:35.409Z] Copying: 988/1024 [MB] (23 MBps) 
[2024-11-15T11:35:35.666Z] Copying: 1012/1024 [MB] (23 MBps) [2024-11-15T11:35:35.923Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-15 11:35:35.827975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.974 [2024-11-15 11:35:35.828365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:52.974 [2024-11-15 11:35:35.828406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:52.974 [2024-11-15 11:35:35.828422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.974 [2024-11-15 11:35:35.828461] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:52.974 [2024-11-15 11:35:35.832429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.974 [2024-11-15 11:35:35.832474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:52.974 [2024-11-15 11:35:35.832503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.943 ms 00:23:52.974 [2024-11-15 11:35:35.832513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.974 [2024-11-15 11:35:35.832756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.974 [2024-11-15 11:35:35.832773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:52.974 [2024-11-15 11:35:35.832785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:23:52.974 [2024-11-15 11:35:35.832800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.974 [2024-11-15 11:35:35.837604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.974 [2024-11-15 11:35:35.837659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:52.974 [2024-11-15 11:35:35.837705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.785 ms 00:23:52.974 [2024-11-15 11:35:35.837732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.974 [2024-11-15 11:35:35.843148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.974 [2024-11-15 11:35:35.843176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:52.975 [2024-11-15 11:35:35.843188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:23:52.975 [2024-11-15 11:35:35.843204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.975 [2024-11-15 11:35:35.868375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.975 [2024-11-15 11:35:35.868410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:52.975 [2024-11-15 11:35:35.868424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.130 ms 00:23:52.975 [2024-11-15 11:35:35.868434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.975 [2024-11-15 11:35:35.883539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.975 [2024-11-15 11:35:35.883589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:52.975 [2024-11-15 11:35:35.883604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.068 ms 00:23:52.975 [2024-11-15 11:35:35.883614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.233 [2024-11-15 11:35:35.998263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.233 [2024-11-15 
11:35:35.998326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.233 [2024-11-15 11:35:35.998344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.608 ms 00:23:53.233 [2024-11-15 11:35:35.998370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.233 [2024-11-15 11:35:36.023197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.233 [2024-11-15 11:35:36.023230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:53.233 [2024-11-15 11:35:36.023244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.803 ms 00:23:53.233 [2024-11-15 11:35:36.023253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.233 [2024-11-15 11:35:36.048680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.233 [2024-11-15 11:35:36.048728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:53.233 [2024-11-15 11:35:36.048754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.392 ms 00:23:53.233 [2024-11-15 11:35:36.048764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.233 [2024-11-15 11:35:36.076723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.233 [2024-11-15 11:35:36.076753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.233 [2024-11-15 11:35:36.076766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.922 ms 00:23:53.233 [2024-11-15 11:35:36.076776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.233 [2024-11-15 11:35:36.101210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.233 [2024-11-15 11:35:36.101264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.233 [2024-11-15 11:35:36.101279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.362 ms 00:23:53.233 [2024-11-15 11:35:36.101289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.233 [2024-11-15 11:35:36.101327] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.233 [2024-11-15 11:35:36.101347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:53.233 [2024-11-15 11:35:36.101360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 
[2024-11-15 11:35:36.101479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 
state: free 00:23:53.233 [2024-11-15 11:35:36.101784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.233 [2024-11-15 11:35:36.101975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.101986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.101997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 
0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.234 [2024-11-15 11:35:36.102572] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.234 [2024-11-15 11:35:36.102582] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf6a3513-021e-47da-a6ae-9434b328e950 00:23:53.234 [2024-11-15 11:35:36.102593] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:53.234 [2024-11-15 11:35:36.102602] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12736 00:23:53.234 [2024-11-15 11:35:36.102612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11776 00:23:53.234 [2024-11-15 11:35:36.102623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0815 00:23:53.234 [2024-11-15 11:35:36.102639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.234 [2024-11-15 11:35:36.102650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.234 [2024-11-15 11:35:36.102660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.234 [2024-11-15 11:35:36.102679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.234 [2024-11-15 11:35:36.102689] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.234 [2024-11-15 11:35:36.102699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.234 [2024-11-15 11:35:36.102710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.234 [2024-11-15 11:35:36.102720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:23:53.234 [2024-11-15 11:35:36.102731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.234 [2024-11-15 11:35:36.117070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.234 [2024-11-15 11:35:36.117137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.234 [2024-11-15 11:35:36.117160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.311 ms 00:23:53.234 [2024-11-15 11:35:36.117170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.234 [2024-11-15 11:35:36.117609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.234 [2024-11-15 11:35:36.117628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.234 [2024-11-15 11:35:36.117640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:23:53.234 [2024-11-15 11:35:36.117650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.234 [2024-11-15 11:35:36.153193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.234 [2024-11-15 11:35:36.153246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.234 [2024-11-15 11:35:36.153259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.234 [2024-11-15 11:35:36.153270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.234 [2024-11-15 11:35:36.153323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.234 [2024-11-15 11:35:36.153336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.234 [2024-11-15 11:35:36.153346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.234 [2024-11-15 11:35:36.153355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.234 [2024-11-15 11:35:36.153454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.234 [2024-11-15 11:35:36.153475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.234 [2024-11-15 11:35:36.153486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.234 [2024-11-15 11:35:36.153495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.234 [2024-11-15 11:35:36.153513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.234 [2024-11-15 11:35:36.153524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.234 [2024-11-15 11:35:36.153534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.234 [2024-11-15 11:35:36.153543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.492 [2024-11-15 11:35:36.236989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.492 [2024-11-15 11:35:36.237058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.492 [2024-11-15 11:35:36.237075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:53.492 [2024-11-15 11:35:36.237093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.492 [2024-11-15 11:35:36.305184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.492 [2024-11-15 11:35:36.305229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.492 [2024-11-15 11:35:36.305244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.492 [2024-11-15 11:35:36.305255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.492 [2024-11-15 11:35:36.305325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.492 [2024-11-15 11:35:36.305340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.493 [2024-11-15 11:35:36.305350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.493 [2024-11-15 11:35:36.305367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.493 [2024-11-15 11:35:36.305429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.493 [2024-11-15 11:35:36.305444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.493 [2024-11-15 11:35:36.305454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.493 [2024-11-15 11:35:36.305463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.493 [2024-11-15 11:35:36.305570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.493 [2024-11-15 11:35:36.305587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.493 [2024-11-15 11:35:36.305598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.493 [2024-11-15 11:35:36.305613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.493 [2024-11-15 11:35:36.305653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.493 [2024-11-15 11:35:36.305668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.493 [2024-11-15 11:35:36.305679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.493 [2024-11-15 11:35:36.305688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.493 [2024-11-15 11:35:36.305727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.493 [2024-11-15 11:35:36.305740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.493 [2024-11-15 11:35:36.305750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.493 [2024-11-15 11:35:36.305759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.493 [2024-11-15 11:35:36.305822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.493 [2024-11-15 11:35:36.305841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.493 [2024-11-15 11:35:36.305851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.493 [2024-11-15 11:35:36.305860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.493 [2024-11-15 11:35:36.305993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 477.987 ms, result 0 00:23:54.426 00:23:54.426 00:23:54.426 11:35:37 ftl.ftl_restore -- 
ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:56.369 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76361 00:23:56.369 11:35:38 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76361 ']' 00:23:56.369 11:35:38 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76361 00:23:56.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76361) - No such process 00:23:56.369 Process with pid 76361 is not found 00:23:56.369 11:35:38 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 76361 is not found' 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:56.369 Remove shared memory files 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:56.369 11:35:38 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:56.369 11:35:39 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:56.369 11:35:39 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:56.369 11:35:39 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:56.369 00:23:56.369 real 3m36.820s 00:23:56.369 user 3m22.930s 00:23:56.369 sys 0m15.508s 00:23:56.369 11:35:39 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:56.369 11:35:39 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:56.369 ************************************ 00:23:56.369 END TEST ftl_restore 00:23:56.369 ************************************ 00:23:56.369 11:35:39 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:56.369 11:35:39 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:56.369 11:35:39 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:56.369 11:35:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:56.369 ************************************ 00:23:56.369 START TEST ftl_dirty_shutdown 00:23:56.369 ************************************ 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:56.369 * Looking for test storage... 
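Editor's note: the restore test's pass criterion just above is restore.sh@82 running md5sum -c against a manifest written earlier in the test, and the "testfile: OK" line is what lets the trap be cleared and the teardown proceed. For readers unfamiliar with md5sum's -c mode, a rough Python equivalent of that check is sketched below; it is illustrative only, not the harness's code, and it assumes the two-column "digest  path" manifest format md5sum emits in text mode.

import hashlib
import sys

def md5_of(path, bufsize=1 << 20):
    # Stream the file in 1 MiB chunks so large test files need not fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def check(manifest):
    # Each manifest line is "<hex digest>  <path>", as written by md5sum.
    ok = True
    with open(manifest) as f:
        for line in f:
            digest, path = line.split(maxsplit=1)
            path = path.strip()
            status = "OK" if md5_of(path) == digest else "FAILED"
            print(f"{path}: {status}")
            ok = ok and status == "OK"
    return ok

if __name__ == "__main__":
    # Exit nonzero on any mismatch, mirroring how the shell test gates on
    # md5sum's exit status before running restore_kill.
    sys.exit(0 if check(sys.argv[1]) else 1)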
00:23:56.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.369 --rc genhtml_branch_coverage=1 00:23:56.369 --rc genhtml_function_coverage=1 00:23:56.369 --rc genhtml_legend=1 00:23:56.369 --rc geninfo_all_blocks=1 00:23:56.369 --rc geninfo_unexecuted_blocks=1 00:23:56.369 00:23:56.369 ' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.369 --rc genhtml_branch_coverage=1 00:23:56.369 --rc genhtml_function_coverage=1 00:23:56.369 --rc genhtml_legend=1 00:23:56.369 --rc geninfo_all_blocks=1 00:23:56.369 --rc geninfo_unexecuted_blocks=1 00:23:56.369 00:23:56.369 ' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.369 --rc genhtml_branch_coverage=1 00:23:56.369 --rc genhtml_function_coverage=1 00:23:56.369 --rc genhtml_legend=1 00:23:56.369 --rc geninfo_all_blocks=1 00:23:56.369 --rc geninfo_unexecuted_blocks=1 00:23:56.369 00:23:56.369 ' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:56.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.369 --rc genhtml_branch_coverage=1 00:23:56.369 --rc genhtml_function_coverage=1 00:23:56.369 --rc genhtml_legend=1 00:23:56.369 --rc geninfo_all_blocks=1 00:23:56.369 --rc geninfo_unexecuted_blocks=1 00:23:56.369 00:23:56.369 ' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:56.369 11:35:39 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78617 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78617 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78617 ']' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:56.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:56.369 11:35:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:56.627 [2024-11-15 11:35:39.411379] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
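The trace that follows assembles the FTL device under test through a long series of rpc.py calls. Condensed into runnable form, that setup reduces to the sketch below; this is a minimal reconstruction from the commands visible in this log (the PCI addresses 0000:00:11.0 and 0000:00:10.0, the 103424 MiB base size, the 5171 MiB cache split, and the bdev names are all taken from this run), not the actual test/ftl scripts.

```bash
#!/usr/bin/env bash
# Minimal sketch of the bdev setup traced in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: QEMU NVMe controller at 0000:00:11.0 (shows up as nvme0n1).
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# Remove lvstores left over from a previous run.
for lvs in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
done

# Fresh lvstore on nvme0n1 plus a 103424 MiB thin-provisioned lvol;
# bdev_lvol_create prints the lvol UUID that becomes the FTL base device.
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
base_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")

# NV cache: second controller at 0000:00:10.0, split into a 5171 MiB partition.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1

# Bind base + cache into one FTL bdev, with L2P DRAM capped at 10 MiB.
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$base_uuid" --l2p_dram_limit 10 -c nvc0n1p0
```

The -t 240 timeout on the final call is not cosmetic: a first-time bdev_ftl_create scrubs the NV cache region, which in this run accounts for 2971.873 ms of the 3383.204 ms total FTL startup reported further down in the trace.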
00:23:56.628 [2024-11-15 11:35:39.411566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78617 ] 00:23:56.886 [2024-11-15 11:35:39.601711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.886 [2024-11-15 11:35:39.745409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:57.821 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:58.080 11:35:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:58.338 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:58.339 { 00:23:58.339 "name": "nvme0n1", 00:23:58.339 "aliases": [ 00:23:58.339 "905c8c22-1506-4a64-8308-f428511c63f0" 00:23:58.339 ], 00:23:58.339 "product_name": "NVMe disk", 00:23:58.339 "block_size": 4096, 00:23:58.339 "num_blocks": 1310720, 00:23:58.339 "uuid": "905c8c22-1506-4a64-8308-f428511c63f0", 00:23:58.339 "numa_id": -1, 00:23:58.339 "assigned_rate_limits": { 00:23:58.339 "rw_ios_per_sec": 0, 00:23:58.339 "rw_mbytes_per_sec": 0, 00:23:58.339 "r_mbytes_per_sec": 0, 00:23:58.339 "w_mbytes_per_sec": 0 00:23:58.339 }, 00:23:58.339 "claimed": true, 00:23:58.339 "claim_type": "read_many_write_one", 00:23:58.339 "zoned": false, 00:23:58.339 "supported_io_types": { 00:23:58.339 "read": true, 00:23:58.339 "write": true, 00:23:58.339 "unmap": true, 00:23:58.339 "flush": true, 00:23:58.339 "reset": true, 00:23:58.339 "nvme_admin": true, 00:23:58.339 "nvme_io": true, 00:23:58.339 "nvme_io_md": false, 00:23:58.339 "write_zeroes": true, 00:23:58.339 "zcopy": false, 00:23:58.339 "get_zone_info": false, 00:23:58.339 "zone_management": false, 00:23:58.339 "zone_append": false, 00:23:58.339 "compare": true, 00:23:58.339 "compare_and_write": false, 00:23:58.339 "abort": true, 00:23:58.339 "seek_hole": false, 00:23:58.339 "seek_data": false, 00:23:58.339 
"copy": true, 00:23:58.339 "nvme_iov_md": false 00:23:58.339 }, 00:23:58.339 "driver_specific": { 00:23:58.339 "nvme": [ 00:23:58.339 { 00:23:58.339 "pci_address": "0000:00:11.0", 00:23:58.339 "trid": { 00:23:58.339 "trtype": "PCIe", 00:23:58.339 "traddr": "0000:00:11.0" 00:23:58.339 }, 00:23:58.339 "ctrlr_data": { 00:23:58.339 "cntlid": 0, 00:23:58.339 "vendor_id": "0x1b36", 00:23:58.339 "model_number": "QEMU NVMe Ctrl", 00:23:58.339 "serial_number": "12341", 00:23:58.339 "firmware_revision": "8.0.0", 00:23:58.339 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:58.339 "oacs": { 00:23:58.339 "security": 0, 00:23:58.339 "format": 1, 00:23:58.339 "firmware": 0, 00:23:58.339 "ns_manage": 1 00:23:58.339 }, 00:23:58.339 "multi_ctrlr": false, 00:23:58.339 "ana_reporting": false 00:23:58.339 }, 00:23:58.339 "vs": { 00:23:58.339 "nvme_version": "1.4" 00:23:58.339 }, 00:23:58.339 "ns_data": { 00:23:58.339 "id": 1, 00:23:58.339 "can_share": false 00:23:58.339 } 00:23:58.339 } 00:23:58.339 ], 00:23:58.339 "mp_policy": "active_passive" 00:23:58.339 } 00:23:58.339 } 00:23:58.339 ]' 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:58.339 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:58.599 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=7fa13ea1-f806-43d6-ba0a-c6457be6f4f3 00:23:58.599 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:58.599 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fa13ea1-f806-43d6-ba0a-c6457be6f4f3 00:23:58.858 11:35:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:59.117 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21 00:23:59.117 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=50ab19a7-824a-49df-915c-28b91764cd3b 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 50ab19a7-824a-49df-915c-28b91764cd3b 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=50ab19a7-824a-49df-915c-28b91764cd3b 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 50ab19a7-824a-49df-915c-28b91764cd3b 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=50ab19a7-824a-49df-915c-28b91764cd3b 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:59.376 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50ab19a7-824a-49df-915c-28b91764cd3b 00:23:59.634 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:59.634 { 00:23:59.634 "name": "50ab19a7-824a-49df-915c-28b91764cd3b", 00:23:59.634 "aliases": [ 00:23:59.634 "lvs/nvme0n1p0" 00:23:59.634 ], 00:23:59.634 "product_name": "Logical Volume", 00:23:59.634 "block_size": 4096, 00:23:59.634 "num_blocks": 26476544, 00:23:59.635 "uuid": "50ab19a7-824a-49df-915c-28b91764cd3b", 00:23:59.635 "assigned_rate_limits": { 00:23:59.635 "rw_ios_per_sec": 0, 00:23:59.635 "rw_mbytes_per_sec": 0, 00:23:59.635 "r_mbytes_per_sec": 0, 00:23:59.635 "w_mbytes_per_sec": 0 00:23:59.635 }, 00:23:59.635 "claimed": false, 00:23:59.635 "zoned": false, 00:23:59.635 "supported_io_types": { 00:23:59.635 "read": true, 00:23:59.635 "write": true, 00:23:59.635 "unmap": true, 00:23:59.635 "flush": false, 00:23:59.635 "reset": true, 00:23:59.635 "nvme_admin": false, 00:23:59.635 "nvme_io": false, 00:23:59.635 "nvme_io_md": false, 00:23:59.635 "write_zeroes": true, 00:23:59.635 "zcopy": false, 00:23:59.635 "get_zone_info": false, 00:23:59.635 "zone_management": false, 00:23:59.635 "zone_append": false, 00:23:59.635 "compare": false, 00:23:59.635 "compare_and_write": false, 00:23:59.635 "abort": false, 00:23:59.635 "seek_hole": true, 00:23:59.635 "seek_data": true, 00:23:59.635 "copy": false, 00:23:59.635 "nvme_iov_md": false 00:23:59.635 }, 00:23:59.635 "driver_specific": { 00:23:59.635 "lvol": { 00:23:59.635 "lvol_store_uuid": "27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21", 00:23:59.635 "base_bdev": "nvme0n1", 00:23:59.635 "thin_provision": true, 00:23:59.635 "num_allocated_clusters": 0, 00:23:59.635 "snapshot": false, 00:23:59.635 "clone": false, 00:23:59.635 "esnap_clone": false 00:23:59.635 } 00:23:59.635 } 00:23:59.635 } 00:23:59.635 ]' 00:23:59.635 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:59.635 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:59.635 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:59.894 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:59.894 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:59.894 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:59.894 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:59.894 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:59.894 11:35:42 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 50ab19a7-824a-49df-915c-28b91764cd3b 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=50ab19a7-824a-49df-915c-28b91764cd3b 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:00.152 11:35:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50ab19a7-824a-49df-915c-28b91764cd3b 00:24:00.411 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:00.411 { 00:24:00.411 "name": "50ab19a7-824a-49df-915c-28b91764cd3b", 00:24:00.411 "aliases": [ 00:24:00.411 "lvs/nvme0n1p0" 00:24:00.411 ], 00:24:00.411 "product_name": "Logical Volume", 00:24:00.411 "block_size": 4096, 00:24:00.412 "num_blocks": 26476544, 00:24:00.412 "uuid": "50ab19a7-824a-49df-915c-28b91764cd3b", 00:24:00.412 "assigned_rate_limits": { 00:24:00.412 "rw_ios_per_sec": 0, 00:24:00.412 "rw_mbytes_per_sec": 0, 00:24:00.412 "r_mbytes_per_sec": 0, 00:24:00.412 "w_mbytes_per_sec": 0 00:24:00.412 }, 00:24:00.412 "claimed": false, 00:24:00.412 "zoned": false, 00:24:00.412 "supported_io_types": { 00:24:00.412 "read": true, 00:24:00.412 "write": true, 00:24:00.412 "unmap": true, 00:24:00.412 "flush": false, 00:24:00.412 "reset": true, 00:24:00.412 "nvme_admin": false, 00:24:00.412 "nvme_io": false, 00:24:00.412 "nvme_io_md": false, 00:24:00.412 "write_zeroes": true, 00:24:00.412 "zcopy": false, 00:24:00.412 "get_zone_info": false, 00:24:00.412 "zone_management": false, 00:24:00.412 "zone_append": false, 00:24:00.412 "compare": false, 00:24:00.412 "compare_and_write": false, 00:24:00.412 "abort": false, 00:24:00.412 "seek_hole": true, 00:24:00.412 "seek_data": true, 00:24:00.412 "copy": false, 00:24:00.412 "nvme_iov_md": false 00:24:00.412 }, 00:24:00.412 "driver_specific": { 00:24:00.412 "lvol": { 00:24:00.412 "lvol_store_uuid": "27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21", 00:24:00.412 "base_bdev": "nvme0n1", 00:24:00.412 "thin_provision": true, 00:24:00.412 "num_allocated_clusters": 0, 00:24:00.412 "snapshot": false, 00:24:00.412 "clone": false, 00:24:00.412 "esnap_clone": false 00:24:00.412 } 00:24:00.412 } 00:24:00.412 } 00:24:00.412 ]' 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:00.412 11:35:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 50ab19a7-824a-49df-915c-28b91764cd3b 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=50ab19a7-824a-49df-915c-28b91764cd3b 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:00.671 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50ab19a7-824a-49df-915c-28b91764cd3b 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:00.930 { 00:24:00.930 "name": "50ab19a7-824a-49df-915c-28b91764cd3b", 00:24:00.930 "aliases": [ 00:24:00.930 "lvs/nvme0n1p0" 00:24:00.930 ], 00:24:00.930 "product_name": "Logical Volume", 00:24:00.930 "block_size": 4096, 00:24:00.930 "num_blocks": 26476544, 00:24:00.930 "uuid": "50ab19a7-824a-49df-915c-28b91764cd3b", 00:24:00.930 "assigned_rate_limits": { 00:24:00.930 "rw_ios_per_sec": 0, 00:24:00.930 "rw_mbytes_per_sec": 0, 00:24:00.930 "r_mbytes_per_sec": 0, 00:24:00.930 "w_mbytes_per_sec": 0 00:24:00.930 }, 00:24:00.930 "claimed": false, 00:24:00.930 "zoned": false, 00:24:00.930 "supported_io_types": { 00:24:00.930 "read": true, 00:24:00.930 "write": true, 00:24:00.930 "unmap": true, 00:24:00.930 "flush": false, 00:24:00.930 "reset": true, 00:24:00.930 "nvme_admin": false, 00:24:00.930 "nvme_io": false, 00:24:00.930 "nvme_io_md": false, 00:24:00.930 "write_zeroes": true, 00:24:00.930 "zcopy": false, 00:24:00.930 "get_zone_info": false, 00:24:00.930 "zone_management": false, 00:24:00.930 "zone_append": false, 00:24:00.930 "compare": false, 00:24:00.930 "compare_and_write": false, 00:24:00.930 "abort": false, 00:24:00.930 "seek_hole": true, 00:24:00.930 "seek_data": true, 00:24:00.930 "copy": false, 00:24:00.930 "nvme_iov_md": false 00:24:00.930 }, 00:24:00.930 "driver_specific": { 00:24:00.930 "lvol": { 00:24:00.930 "lvol_store_uuid": "27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21", 00:24:00.930 "base_bdev": "nvme0n1", 00:24:00.930 "thin_provision": true, 00:24:00.930 "num_allocated_clusters": 0, 00:24:00.930 "snapshot": false, 00:24:00.930 "clone": false, 00:24:00.930 "esnap_clone": false 00:24:00.930 } 00:24:00.930 } 00:24:00.930 } 00:24:00.930 ]' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 50ab19a7-824a-49df-915c-28b91764cd3b 
--l2p_dram_limit 10' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:00.930 11:35:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 50ab19a7-824a-49df-915c-28b91764cd3b --l2p_dram_limit 10 -c nvc0n1p0 00:24:01.190 [2024-11-15 11:35:44.062907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.190 [2024-11-15 11:35:44.062977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:01.190 [2024-11-15 11:35:44.063015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:01.190 [2024-11-15 11:35:44.063027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.190 [2024-11-15 11:35:44.063136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.190 [2024-11-15 11:35:44.063156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.190 [2024-11-15 11:35:44.063170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:01.190 [2024-11-15 11:35:44.063182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.190 [2024-11-15 11:35:44.063228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:01.190 [2024-11-15 11:35:44.064218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:01.190 [2024-11-15 11:35:44.064273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.190 [2024-11-15 11:35:44.064287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.190 [2024-11-15 11:35:44.064302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:24:01.190 [2024-11-15 11:35:44.064313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.190 [2024-11-15 11:35:44.064458] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6b15c40c-275b-44da-9182-71512f00bc3a 00:24:01.190 [2024-11-15 11:35:44.066292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.190 [2024-11-15 11:35:44.066352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:01.191 [2024-11-15 11:35:44.066368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:01.191 [2024-11-15 11:35:44.066381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.075619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.075666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.191 [2024-11-15 11:35:44.075696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.161 ms 00:24:01.191 [2024-11-15 11:35:44.075709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.075817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.075838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.191 [2024-11-15 11:35:44.075850] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:01.191 [2024-11-15 11:35:44.075867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.075974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.075995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:01.191 [2024-11-15 11:35:44.076008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:01.191 [2024-11-15 11:35:44.076028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.076079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:01.191 [2024-11-15 11:35:44.080712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.080765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.191 [2024-11-15 11:35:44.080800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:24:01.191 [2024-11-15 11:35:44.080812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.080857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.080872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:01.191 [2024-11-15 11:35:44.080886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:01.191 [2024-11-15 11:35:44.080897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.080950] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:01.191 [2024-11-15 11:35:44.081148] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:01.191 [2024-11-15 11:35:44.081175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:01.191 [2024-11-15 11:35:44.081192] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:01.191 [2024-11-15 11:35:44.081208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081222] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:01.191 [2024-11-15 11:35:44.081249] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:01.191 [2024-11-15 11:35:44.081266] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:01.191 [2024-11-15 11:35:44.081278] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:01.191 [2024-11-15 11:35:44.081292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.081304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:01.191 [2024-11-15 11:35:44.081318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:01.191 [2024-11-15 11:35:44.081340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.081435] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.191 [2024-11-15 11:35:44.081450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:01.191 [2024-11-15 11:35:44.081464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:01.191 [2024-11-15 11:35:44.081476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.191 [2024-11-15 11:35:44.081586] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:01.191 [2024-11-15 11:35:44.081609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:01.191 [2024-11-15 11:35:44.081624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:01.191 [2024-11-15 11:35:44.081661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:01.191 [2024-11-15 11:35:44.081697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:01.191 [2024-11-15 11:35:44.081721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:01.191 [2024-11-15 11:35:44.081733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:01.191 [2024-11-15 11:35:44.081746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:01.191 [2024-11-15 11:35:44.081756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:01.191 [2024-11-15 11:35:44.081769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:01.191 [2024-11-15 11:35:44.081780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:01.191 [2024-11-15 11:35:44.081806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:01.191 [2024-11-15 11:35:44.081862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:01.191 [2024-11-15 11:35:44.081897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:01.191 [2024-11-15 11:35:44.081932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081956] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:01.191 [2024-11-15 11:35:44.081966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:01.191 [2024-11-15 11:35:44.081979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.191 [2024-11-15 11:35:44.081989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:01.191 [2024-11-15 11:35:44.082004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:01.191 [2024-11-15 11:35:44.082030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:01.191 [2024-11-15 11:35:44.082042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:01.191 [2024-11-15 11:35:44.082052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:01.191 [2024-11-15 11:35:44.082100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:01.191 [2024-11-15 11:35:44.082111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:01.191 [2024-11-15 11:35:44.082124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:01.191 [2024-11-15 11:35:44.082135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.082148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:01.191 [2024-11-15 11:35:44.082158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:01.191 [2024-11-15 11:35:44.082170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.082181] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:01.191 [2024-11-15 11:35:44.082200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:01.191 [2024-11-15 11:35:44.082211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:01.191 [2024-11-15 11:35:44.082242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.191 [2024-11-15 11:35:44.082254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:01.191 [2024-11-15 11:35:44.082269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:01.191 [2024-11-15 11:35:44.082280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:01.191 [2024-11-15 11:35:44.082294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:01.191 [2024-11-15 11:35:44.082306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:01.191 [2024-11-15 11:35:44.082319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:01.191 [2024-11-15 11:35:44.082335] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:01.191 [2024-11-15 11:35:44.082352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.191 [2024-11-15 11:35:44.082368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:01.191 [2024-11-15 11:35:44.082381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:01.191 [2024-11-15 11:35:44.082393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:01.191 [2024-11-15 11:35:44.082420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:01.191 [2024-11-15 11:35:44.082431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:01.191 [2024-11-15 11:35:44.082444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:01.192 [2024-11-15 11:35:44.082455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:01.192 [2024-11-15 11:35:44.082467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:01.192 [2024-11-15 11:35:44.082478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:01.192 [2024-11-15 11:35:44.082493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:01.192 [2024-11-15 11:35:44.082505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:01.192 [2024-11-15 11:35:44.082518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:01.192 [2024-11-15 11:35:44.082529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:01.192 [2024-11-15 11:35:44.082544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:01.192 [2024-11-15 11:35:44.082556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:01.192 [2024-11-15 11:35:44.082571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.192 [2024-11-15 11:35:44.082583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:01.192 [2024-11-15 11:35:44.082602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:01.192 [2024-11-15 11:35:44.082613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:01.192 [2024-11-15 11:35:44.082641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:01.192 [2024-11-15 11:35:44.082653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.192 [2024-11-15 11:35:44.082681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:01.192 [2024-11-15 11:35:44.082693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.132 ms 00:24:01.192 [2024-11-15 11:35:44.082706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.192 [2024-11-15 11:35:44.082759] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:01.192 [2024-11-15 11:35:44.082780] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:04.478 [2024-11-15 11:35:47.054621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.054694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:04.478 [2024-11-15 11:35:47.054714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2971.873 ms 00:24:04.478 [2024-11-15 11:35:47.054728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.087940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.087996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.478 [2024-11-15 11:35:47.088015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.982 ms 00:24:04.478 [2024-11-15 11:35:47.088065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.088241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.088263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:04.478 [2024-11-15 11:35:47.088278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:24:04.478 [2024-11-15 11:35:47.088298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.125233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.125505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.478 [2024-11-15 11:35:47.125532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.885 ms 00:24:04.478 [2024-11-15 11:35:47.125547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.125590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.125613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.478 [2024-11-15 11:35:47.125625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:04.478 [2024-11-15 11:35:47.125639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.126302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.126340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.478 [2024-11-15 11:35:47.126355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:24:04.478 [2024-11-15 11:35:47.126368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.126525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.126544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.478 [2024-11-15 11:35:47.126559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:24:04.478 [2024-11-15 11:35:47.126574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.144505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.144692] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.478 [2024-11-15 11:35:47.144719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.907 ms 00:24:04.478 [2024-11-15 11:35:47.144734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.166906] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:04.478 [2024-11-15 11:35:47.170877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.170913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:04.478 [2024-11-15 11:35:47.170931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.046 ms 00:24:04.478 [2024-11-15 11:35:47.170942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.242754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.242823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:04.478 [2024-11-15 11:35:47.242846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.771 ms 00:24:04.478 [2024-11-15 11:35:47.242858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.243092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.243115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:04.478 [2024-11-15 11:35:47.243133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:24:04.478 [2024-11-15 11:35:47.243145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.268193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.268233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:04.478 [2024-11-15 11:35:47.268268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.968 ms 00:24:04.478 [2024-11-15 11:35:47.268280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.292350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.292388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:04.478 [2024-11-15 11:35:47.292438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.018 ms 00:24:04.478 [2024-11-15 11:35:47.292448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.293195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.293225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:04.478 [2024-11-15 11:35:47.293243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:24:04.478 [2024-11-15 11:35:47.293257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.369512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.369557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:04.478 [2024-11-15 11:35:47.369580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.206 ms 00:24:04.478 [2024-11-15 11:35:47.369592] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.395654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.395694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:04.478 [2024-11-15 11:35:47.395713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.974 ms 00:24:04.478 [2024-11-15 11:35:47.395724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.478 [2024-11-15 11:35:47.420122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.478 [2024-11-15 11:35:47.420160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:04.478 [2024-11-15 11:35:47.420178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.349 ms 00:24:04.478 [2024-11-15 11:35:47.420188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.737 [2024-11-15 11:35:47.444838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.737 [2024-11-15 11:35:47.444876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:04.737 [2024-11-15 11:35:47.444895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.609 ms 00:24:04.737 [2024-11-15 11:35:47.444905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.737 [2024-11-15 11:35:47.444954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.737 [2024-11-15 11:35:47.444969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:04.737 [2024-11-15 11:35:47.444985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:04.737 [2024-11-15 11:35:47.444996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.737 [2024-11-15 11:35:47.445165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.737 [2024-11-15 11:35:47.445199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:04.737 [2024-11-15 11:35:47.445218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:04.737 [2024-11-15 11:35:47.445230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.738 [2024-11-15 11:35:47.446658] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3383.204 ms, result 0 00:24:04.738 { 00:24:04.738 "name": "ftl0", 00:24:04.738 "uuid": "6b15c40c-275b-44da-9182-71512f00bc3a" 00:24:04.738 } 00:24:04.738 11:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:04.738 11:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:04.997 11:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:04.997 11:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:04.997 11:35:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:05.256 /dev/nbd0 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:05.256 1+0 records in 00:24:05.256 1+0 records out 00:24:05.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038038 s, 10.8 MB/s 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:24:05.256 11:35:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:05.256 [2024-11-15 11:35:48.103645] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:24:05.256 [2024-11-15 11:35:48.103787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78759 ] 00:24:05.515 [2024-11-15 11:35:48.277067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.515 [2024-11-15 11:35:48.425739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.892  [2024-11-15T11:35:50.776Z] Copying: 207/1024 [MB] (207 MBps) [2024-11-15T11:35:51.709Z] Copying: 409/1024 [MB] (202 MBps) [2024-11-15T11:35:53.085Z] Copying: 612/1024 [MB] (202 MBps) [2024-11-15T11:35:54.020Z] Copying: 812/1024 [MB] (200 MBps) [2024-11-15T11:35:54.020Z] Copying: 997/1024 [MB] (185 MBps) [2024-11-15T11:35:54.955Z] Copying: 1024/1024 [MB] (average 199 MBps) 00:24:12.006 00:24:12.006 11:35:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:13.909 11:35:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:13.909 [2024-11-15 11:35:56.665236] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
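With ftl0 assembled, the I/O phase recorded just above and continuing below is equally compact. As a minimal sketch reconstructed from the commands in this log (paths, block counts, and the nbd device are from this run): the FTL bdev is exposed through the kernel nbd driver, 1 GiB of random data is staged in a plain file and checksummed, then replayed into /dev/nbd0 with O_DIRECT so it travels the FTL write path.

```bash
#!/usr/bin/env bash
# Minimal sketch of the I/O phase traced in this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

modprobe nbd
$rpc nbd_start_disk ftl0 /dev/nbd0     # FTL bdev becomes a kernel block device

# Stage 1 GiB (262144 x 4 KiB blocks) of random data and record its checksum.
$spdk_dd -m 0x2 --if=/dev/urandom --of="$testfile" --bs=4096 --count=262144
md5sum "$testfile"

# Replay the file into the FTL device, bypassing the page cache.
$spdk_dd -m 0x2 --if="$testfile" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

sync /dev/nbd0
$rpc nbd_stop_disk /dev/nbd0
$rpc bdev_ftl_unload -b ftl0           # persists L2P + metadata, sets FTL clean state
```

The progress lines around this point show why the two copies differ so sharply: roughly 199 MBps average into the plain file versus roughly 15 MBps through nbd into FTL, consistent with 4 KiB-at-a-time writes crossing the nbd socket into an L2P held to 10 MiB of DRAM (--l2p_dram_limit 10; the log above reports "l2p maximum resident size is: 9 (of 10) MiB").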
00:24:13.909 [2024-11-15 11:35:56.665421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78853 ] 00:24:13.909 [2024-11-15 11:35:56.853266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.179 [2024-11-15 11:35:56.988368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.552  [2024-11-15T11:35:59.435Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-15T11:36:00.368Z] Copying: 29/1024 [MB] (14 MBps) [2024-11-15T11:36:01.302Z] Copying: 43/1024 [MB] (13 MBps) [2024-11-15T11:36:02.677Z] Copying: 58/1024 [MB] (14 MBps) [2024-11-15T11:36:03.612Z] Copying: 73/1024 [MB] (15 MBps) [2024-11-15T11:36:04.546Z] Copying: 88/1024 [MB] (15 MBps) [2024-11-15T11:36:05.479Z] Copying: 104/1024 [MB] (15 MBps) [2024-11-15T11:36:06.413Z] Copying: 118/1024 [MB] (14 MBps) [2024-11-15T11:36:07.347Z] Copying: 133/1024 [MB] (14 MBps) [2024-11-15T11:36:08.281Z] Copying: 148/1024 [MB] (14 MBps) [2024-11-15T11:36:09.656Z] Copying: 163/1024 [MB] (15 MBps) [2024-11-15T11:36:10.591Z] Copying: 179/1024 [MB] (15 MBps) [2024-11-15T11:36:11.526Z] Copying: 194/1024 [MB] (15 MBps) [2024-11-15T11:36:12.502Z] Copying: 209/1024 [MB] (15 MBps) [2024-11-15T11:36:13.451Z] Copying: 225/1024 [MB] (15 MBps) [2024-11-15T11:36:14.386Z] Copying: 240/1024 [MB] (15 MBps) [2024-11-15T11:36:15.320Z] Copying: 255/1024 [MB] (15 MBps) [2024-11-15T11:36:16.696Z] Copying: 270/1024 [MB] (15 MBps) [2024-11-15T11:36:17.631Z] Copying: 285/1024 [MB] (15 MBps) [2024-11-15T11:36:18.566Z] Copying: 300/1024 [MB] (14 MBps) [2024-11-15T11:36:19.501Z] Copying: 315/1024 [MB] (15 MBps) [2024-11-15T11:36:20.437Z] Copying: 330/1024 [MB] (15 MBps) [2024-11-15T11:36:21.371Z] Copying: 346/1024 [MB] (15 MBps) [2024-11-15T11:36:22.307Z] Copying: 360/1024 [MB] (14 MBps) [2024-11-15T11:36:23.683Z] Copying: 375/1024 [MB] (14 MBps) [2024-11-15T11:36:24.616Z] Copying: 390/1024 [MB] (14 MBps) [2024-11-15T11:36:25.551Z] Copying: 405/1024 [MB] (14 MBps) [2024-11-15T11:36:26.489Z] Copying: 420/1024 [MB] (14 MBps) [2024-11-15T11:36:27.425Z] Copying: 435/1024 [MB] (14 MBps) [2024-11-15T11:36:28.360Z] Copying: 450/1024 [MB] (15 MBps) [2024-11-15T11:36:29.295Z] Copying: 465/1024 [MB] (15 MBps) [2024-11-15T11:36:30.669Z] Copying: 480/1024 [MB] (15 MBps) [2024-11-15T11:36:31.606Z] Copying: 496/1024 [MB] (15 MBps) [2024-11-15T11:36:32.540Z] Copying: 511/1024 [MB] (15 MBps) [2024-11-15T11:36:33.475Z] Copying: 526/1024 [MB] (14 MBps) [2024-11-15T11:36:34.410Z] Copying: 541/1024 [MB] (15 MBps) [2024-11-15T11:36:35.344Z] Copying: 556/1024 [MB] (15 MBps) [2024-11-15T11:36:36.278Z] Copying: 571/1024 [MB] (14 MBps) [2024-11-15T11:36:37.653Z] Copying: 586/1024 [MB] (14 MBps) [2024-11-15T11:36:38.588Z] Copying: 601/1024 [MB] (15 MBps) [2024-11-15T11:36:39.522Z] Copying: 616/1024 [MB] (15 MBps) [2024-11-15T11:36:40.507Z] Copying: 631/1024 [MB] (14 MBps) [2024-11-15T11:36:41.442Z] Copying: 646/1024 [MB] (15 MBps) [2024-11-15T11:36:42.376Z] Copying: 661/1024 [MB] (15 MBps) [2024-11-15T11:36:43.310Z] Copying: 677/1024 [MB] (15 MBps) [2024-11-15T11:36:44.684Z] Copying: 692/1024 [MB] (15 MBps) [2024-11-15T11:36:45.617Z] Copying: 707/1024 [MB] (15 MBps) [2024-11-15T11:36:46.553Z] Copying: 722/1024 [MB] (14 MBps) [2024-11-15T11:36:47.488Z] Copying: 737/1024 [MB] (15 MBps) [2024-11-15T11:36:48.422Z] Copying: 752/1024 [MB] (15 MBps) 
[2024-11-15T11:36:49.356Z] Copying: 768/1024 [MB] (15 MBps) [2024-11-15T11:36:50.292Z] Copying: 783/1024 [MB] (15 MBps) [2024-11-15T11:36:51.666Z] Copying: 797/1024 [MB] (14 MBps) [2024-11-15T11:36:52.599Z] Copying: 812/1024 [MB] (15 MBps) [2024-11-15T11:36:53.533Z] Copying: 827/1024 [MB] (15 MBps) [2024-11-15T11:36:54.536Z] Copying: 842/1024 [MB] (14 MBps) [2024-11-15T11:36:55.480Z] Copying: 857/1024 [MB] (15 MBps) [2024-11-15T11:36:56.413Z] Copying: 872/1024 [MB] (14 MBps) [2024-11-15T11:36:57.346Z] Copying: 887/1024 [MB] (15 MBps) [2024-11-15T11:36:58.279Z] Copying: 902/1024 [MB] (14 MBps) [2024-11-15T11:36:59.653Z] Copying: 917/1024 [MB] (15 MBps) [2024-11-15T11:37:00.587Z] Copying: 932/1024 [MB] (14 MBps) [2024-11-15T11:37:01.521Z] Copying: 946/1024 [MB] (14 MBps) [2024-11-15T11:37:02.475Z] Copying: 961/1024 [MB] (15 MBps) [2024-11-15T11:37:03.409Z] Copying: 976/1024 [MB] (14 MBps) [2024-11-15T11:37:04.342Z] Copying: 992/1024 [MB] (15 MBps) [2024-11-15T11:37:05.275Z] Copying: 1007/1024 [MB] (15 MBps) [2024-11-15T11:37:05.533Z] Copying: 1022/1024 [MB] (14 MBps) [2024-11-15T11:37:06.466Z] Copying: 1024/1024 [MB] (average 15 MBps) 00:25:23.517 00:25:23.517 11:37:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:23.517 11:37:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:23.775 11:37:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:24.033 [2024-11-15 11:37:06.849622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.033 [2024-11-15 11:37:06.849677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:24.033 [2024-11-15 11:37:06.849697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:24.033 [2024-11-15 11:37:06.849710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.033 [2024-11-15 11:37:06.849747] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:24.033 [2024-11-15 11:37:06.852987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.033 [2024-11-15 11:37:06.853017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:24.033 [2024-11-15 11:37:06.853041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:25:24.033 [2024-11-15 11:37:06.853053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.033 [2024-11-15 11:37:06.855206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.033 [2024-11-15 11:37:06.855243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:24.033 [2024-11-15 11:37:06.855261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms 00:25:24.033 [2024-11-15 11:37:06.855273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.033 [2024-11-15 11:37:06.871203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.033 [2024-11-15 11:37:06.871246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:24.033 [2024-11-15 11:37:06.871282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.904 ms 00:25:24.033 [2024-11-15 11:37:06.871294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.033 [2024-11-15 11:37:06.876465] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.033 [2024-11-15 11:37:06.876497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:24.033 [2024-11-15 11:37:06.876513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:25:24.033 [2024-11-15 11:37:06.876524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.033 [2024-11-15 11:37:06.901601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.033 [2024-11-15 11:37:06.901640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:24.034 [2024-11-15 11:37:06.901658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.959 ms 00:25:24.034 [2024-11-15 11:37:06.901669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.034 [2024-11-15 11:37:06.917779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.034 [2024-11-15 11:37:06.917819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:24.034 [2024-11-15 11:37:06.917838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.057 ms 00:25:24.034 [2024-11-15 11:37:06.917852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.034 [2024-11-15 11:37:06.918000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.034 [2024-11-15 11:37:06.918019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:24.034 [2024-11-15 11:37:06.918085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:25:24.034 [2024-11-15 11:37:06.918099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.034 [2024-11-15 11:37:06.943479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.034 [2024-11-15 11:37:06.943647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:24.034 [2024-11-15 11:37:06.943678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.354 ms 00:25:24.034 [2024-11-15 11:37:06.943691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.034 [2024-11-15 11:37:06.968852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.034 [2024-11-15 11:37:06.968891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:24.034 [2024-11-15 11:37:06.968926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.114 ms 00:25:24.034 [2024-11-15 11:37:06.968936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.293 [2024-11-15 11:37:06.996190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.293 [2024-11-15 11:37:06.996229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:24.293 [2024-11-15 11:37:06.996264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.205 ms 00:25:24.293 [2024-11-15 11:37:06.996275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.293 [2024-11-15 11:37:07.021791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.293 [2024-11-15 11:37:07.021829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:24.293 [2024-11-15 11:37:07.021864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.391 ms 00:25:24.293 [2024-11-15 11:37:07.021875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:24.293 [2024-11-15 11:37:07.021923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:24.293 [2024-11-15 11:37:07.021943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.021958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.021970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.021983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.021994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:25:24.293 [2024-11-15 11:37:07.022311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:25:24.293 [2024-11-15 11:37:07.022715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.022999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023415] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:24.294 [2024-11-15 11:37:07.023468] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:24.294 [2024-11-15 11:37:07.023482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6b15c40c-275b-44da-9182-71512f00bc3a 00:25:24.294 [2024-11-15 11:37:07.023494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:24.294 [2024-11-15 11:37:07.023510] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:24.294 [2024-11-15 11:37:07.023521] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:24.294 [2024-11-15 11:37:07.023538] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:24.294 [2024-11-15 11:37:07.023549] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:24.294 [2024-11-15 11:37:07.023572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:24.294 [2024-11-15 11:37:07.023584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:24.294 [2024-11-15 11:37:07.023597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:24.294 [2024-11-15 11:37:07.023607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:24.294 [2024-11-15 11:37:07.023620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.294 [2024-11-15 11:37:07.023640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:24.294 [2024-11-15 11:37:07.023655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.704 ms 00:25:24.294 [2024-11-15 11:37:07.023667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.294 [2024-11-15 11:37:07.038942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.294 [2024-11-15 11:37:07.038982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:24.294 [2024-11-15 11:37:07.039007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.227 ms 00:25:24.294 [2024-11-15 11:37:07.039018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.294 [2024-11-15 11:37:07.039569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.294 [2024-11-15 11:37:07.039601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:24.294 [2024-11-15 11:37:07.039619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:25:24.294 [2024-11-15 11:37:07.039631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.294 [2024-11-15 11:37:07.087252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.294 [2024-11-15 11:37:07.087295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.294 [2024-11-15 11:37:07.087331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.294 [2024-11-15 11:37:07.087342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.294 [2024-11-15 11:37:07.087415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.294 [2024-11-15 11:37:07.087430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.294 
[2024-11-15 11:37:07.087459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.294 [2024-11-15 11:37:07.087470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.294 [2024-11-15 11:37:07.087582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.295 [2024-11-15 11:37:07.087603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.295 [2024-11-15 11:37:07.087617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.295 [2024-11-15 11:37:07.087628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.295 [2024-11-15 11:37:07.087658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.295 [2024-11-15 11:37:07.087670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.295 [2024-11-15 11:37:07.087684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.295 [2024-11-15 11:37:07.087694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.295 [2024-11-15 11:37:07.174302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.295 [2024-11-15 11:37:07.174358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.295 [2024-11-15 11:37:07.174395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.295 [2024-11-15 11:37:07.174406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.249124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.249336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:24.553 [2024-11-15 11:37:07.249371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.249385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.249540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.249558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:24.553 [2024-11-15 11:37:07.249573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.249587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.249672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.249689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:24.553 [2024-11-15 11:37:07.249703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.249714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.249840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.249859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:24.553 [2024-11-15 11:37:07.249873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.249903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.249968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.249984] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:24.553 [2024-11-15 11:37:07.249997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.250008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.250054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.250067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:24.553 [2024-11-15 11:37:07.250080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.250090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.250211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.553 [2024-11-15 11:37:07.250244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:24.553 [2024-11-15 11:37:07.250259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.553 [2024-11-15 11:37:07.250271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.553 [2024-11-15 11:37:07.250428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.775 ms, result 0 00:25:24.553 true 00:25:24.553 11:37:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78617 00:25:24.553 11:37:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78617 00:25:24.553 11:37:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:24.553 [2024-11-15 11:37:07.384891] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:25:24.553 [2024-11-15 11:37:07.385358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79551 ] 00:25:24.811 [2024-11-15 11:37:07.566253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.811 [2024-11-15 11:37:07.663535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.194  [2024-11-15T11:37:10.076Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-15T11:37:11.009Z] Copying: 419/1024 [MB] (206 MBps) [2024-11-15T11:37:11.943Z] Copying: 627/1024 [MB] (207 MBps) [2024-11-15T11:37:13.317Z] Copying: 832/1024 [MB] (205 MBps) [2024-11-15T11:37:13.883Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:25:30.934 00:25:30.934 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78617 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:30.934 11:37:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.934 [2024-11-15 11:37:13.866402] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
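
With the clean path done — the 'FTL shutdown' chain above persisted L2P, NV-cache, band and trim metadata, set the clean state and returned result 0 — the test hard-kills the target and restarts FTL inside spdk_dd itself. A condensed replay of the traced commands (dirty_shutdown.sh@78-@88); $spdk abbreviates /home/vagrant/spdk_repo/spdk and does not appear in the log:

  sync /dev/nbd0                                   # @78
  $spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0     # @79
  $spdk/scripts/rpc.py bdev_ftl_unload -b ftl0     # @80 clean unload, result 0
  kill -9 78617                                    # @83 hard-kill spdk_tgt (pid from the log)
  rm -f /dev/shm/spdk_tgt_trace.pid78617           # @84
  $spdk/build/bin/spdk_dd --if=/dev/urandom --of=$spdk/test/ftl/testfile2 \
      --bs=4096 --count=262144                     # @87 second 1 GiB payload
  $spdk/build/bin/spdk_dd --if=$spdk/test/ftl/testfile2 --ob=ftl0 \
      --count=262144 --seek=262144 \
      --json=$spdk/test/ftl/config/ftl.json        # @88 reopen ftl0 from config, write upper half

The EAL banner that follows belongs to that @88 invocation; note the blobstore recovery and the 'Set FTL dirty state' step in its startup trace.
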
00:25:30.934 [2024-11-15 11:37:13.866811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79618 ] 00:25:31.192 [2024-11-15 11:37:14.047173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.450 [2024-11-15 11:37:14.143943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.708 [2024-11-15 11:37:14.460901] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.708 [2024-11-15 11:37:14.460987] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:31.708 [2024-11-15 11:37:14.526662] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:31.708 [2024-11-15 11:37:14.526971] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:31.708 [2024-11-15 11:37:14.527197] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:31.965 [2024-11-15 11:37:14.798584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.798628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:31.965 [2024-11-15 11:37:14.798647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:31.965 [2024-11-15 11:37:14.798658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.798716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.798731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:31.965 [2024-11-15 11:37:14.798742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:31.965 [2024-11-15 11:37:14.798750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.798776] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:31.965 [2024-11-15 11:37:14.799599] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:31.965 [2024-11-15 11:37:14.799622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.799633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:31.965 [2024-11-15 11:37:14.799644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:25:31.965 [2024-11-15 11:37:14.799655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.801537] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:31.965 [2024-11-15 11:37:14.814976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.815193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:31.965 [2024-11-15 11:37:14.815220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.441 ms 00:25:31.965 [2024-11-15 11:37:14.815232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.815299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.815318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:31.965 [2024-11-15 11:37:14.815330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:31.965 [2024-11-15 11:37:14.815340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.823592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.823626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:31.965 [2024-11-15 11:37:14.823640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.173 ms 00:25:31.965 [2024-11-15 11:37:14.823650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.823730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.823747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:31.965 [2024-11-15 11:37:14.823773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:31.965 [2024-11-15 11:37:14.823783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.823846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.823863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:31.965 [2024-11-15 11:37:14.823874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:31.965 [2024-11-15 11:37:14.823884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.823913] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:31.965 [2024-11-15 11:37:14.828284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.828316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:31.965 [2024-11-15 11:37:14.828330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.378 ms 00:25:31.965 [2024-11-15 11:37:14.828340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.828371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.828385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:31.965 [2024-11-15 11:37:14.828395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:31.965 [2024-11-15 11:37:14.828405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.828460] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:31.965 [2024-11-15 11:37:14.828488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:31.965 [2024-11-15 11:37:14.828522] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:31.965 [2024-11-15 11:37:14.828540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:31.965 [2024-11-15 11:37:14.828628] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:31.965 [2024-11-15 11:37:14.828641] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:31.965 
[2024-11-15 11:37:14.828652] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:31.965 [2024-11-15 11:37:14.828664] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:31.965 [2024-11-15 11:37:14.828680] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:31.965 [2024-11-15 11:37:14.828691] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:31.965 [2024-11-15 11:37:14.828700] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:31.965 [2024-11-15 11:37:14.828709] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:31.965 [2024-11-15 11:37:14.828718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:31.965 [2024-11-15 11:37:14.828728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.828737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:31.965 [2024-11-15 11:37:14.828747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:31.965 [2024-11-15 11:37:14.828757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.828830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.965 [2024-11-15 11:37:14.828847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:31.965 [2024-11-15 11:37:14.828857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:31.965 [2024-11-15 11:37:14.828866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.965 [2024-11-15 11:37:14.828963] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:31.965 [2024-11-15 11:37:14.828981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:31.965 [2024-11-15 11:37:14.828992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.965 [2024-11-15 11:37:14.829002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.965 [2024-11-15 11:37:14.829011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:31.965 [2024-11-15 11:37:14.829020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:31.965 [2024-11-15 11:37:14.829065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:31.965 [2024-11-15 11:37:14.829090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:31.965 [2024-11-15 11:37:14.829100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:31.965 [2024-11-15 11:37:14.829110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.965 [2024-11-15 11:37:14.829125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:31.965 [2024-11-15 11:37:14.829146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:31.965 [2024-11-15 11:37:14.829155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:31.965 [2024-11-15 11:37:14.829181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:31.965 [2024-11-15 11:37:14.829191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:31.966 [2024-11-15 11:37:14.829201] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:31.966 [2024-11-15 11:37:14.829221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:31.966 [2024-11-15 11:37:14.829249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:31.966 [2024-11-15 11:37:14.829277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:31.966 [2024-11-15 11:37:14.829304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:31.966 [2024-11-15 11:37:14.829332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:31.966 [2024-11-15 11:37:14.829359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.966 [2024-11-15 11:37:14.829377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:31.966 [2024-11-15 11:37:14.829386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:31.966 [2024-11-15 11:37:14.829395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:31.966 [2024-11-15 11:37:14.829405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:31.966 [2024-11-15 11:37:14.829413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:31.966 [2024-11-15 11:37:14.829422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:31.966 [2024-11-15 11:37:14.829456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:31.966 [2024-11-15 11:37:14.829465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.966 [2024-11-15 11:37:14.829489] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:31.966 [2024-11-15 11:37:14.829500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:31.966 [2024-11-15 11:37:14.829511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:31.966 [2024-11-15 
11:37:14.829536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:31.966 [2024-11-15 11:37:14.829562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:31.966 [2024-11-15 11:37:14.829572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:31.966 [2024-11-15 11:37:14.829582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:31.966 [2024-11-15 11:37:14.829591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:31.966 [2024-11-15 11:37:14.829600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:31.966 [2024-11-15 11:37:14.829611] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:31.966 [2024-11-15 11:37:14.829624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:31.966 [2024-11-15 11:37:14.829646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:31.966 [2024-11-15 11:37:14.829656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:31.966 [2024-11-15 11:37:14.829666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:31.966 [2024-11-15 11:37:14.829677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:31.966 [2024-11-15 11:37:14.829687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:31.966 [2024-11-15 11:37:14.829697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:31.966 [2024-11-15 11:37:14.829706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:31.966 [2024-11-15 11:37:14.829717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:31.966 [2024-11-15 11:37:14.829727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:31.966 [2024-11-15 11:37:14.829779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:31.966 [2024-11-15 11:37:14.829790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:31.966 [2024-11-15 11:37:14.829812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:31.966 [2024-11-15 11:37:14.829822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:31.966 [2024-11-15 11:37:14.829832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:31.966 [2024-11-15 11:37:14.829843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-11-15 11:37:14.829854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:31.966 [2024-11-15 11:37:14.829865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:25:31.966 [2024-11-15 11:37:14.829875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-11-15 11:37:14.864184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-11-15 11:37:14.864524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:31.966 [2024-11-15 11:37:14.864555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.247 ms 00:25:31.966 [2024-11-15 11:37:14.864579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-11-15 11:37:14.864694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-11-15 11:37:14.864717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:31.966 [2024-11-15 11:37:14.864730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:31.966 [2024-11-15 11:37:14.864741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-11-15 11:37:14.911536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-11-15 11:37:14.911599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:31.966 [2024-11-15 11:37:14.911639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.702 ms 00:25:31.966 [2024-11-15 11:37:14.911650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-11-15 11:37:14.911726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-11-15 11:37:14.911743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:31.966 [2024-11-15 11:37:14.911755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:31.966 [2024-11-15 11:37:14.911765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:31.966 [2024-11-15 11:37:14.912517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:31.966 [2024-11-15 11:37:14.912552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:31.966 [2024-11-15 11:37:14.912569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:25:31.966 [2024-11-15 11:37:14.912595] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.912774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.912799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:32.224 [2024-11-15 11:37:14.912812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:25:32.224 [2024-11-15 11:37:14.912823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.930052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.930100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:32.224 [2024-11-15 11:37:14.930116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.203 ms 00:25:32.224 [2024-11-15 11:37:14.930126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.945014] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:32.224 [2024-11-15 11:37:14.945342] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:32.224 [2024-11-15 11:37:14.945523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.945568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:32.224 [2024-11-15 11:37:14.945680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.267 ms 00:25:32.224 [2024-11-15 11:37:14.945726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.970969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.971264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:32.224 [2024-11-15 11:37:14.971414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms 00:25:32.224 [2024-11-15 11:37:14.971460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.984534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.984705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:32.224 [2024-11-15 11:37:14.984806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.983 ms 00:25:32.224 [2024-11-15 11:37:14.984849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.998124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.998298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:32.224 [2024-11-15 11:37:14.998455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.208 ms 00:25:32.224 [2024-11-15 11:37:14.998506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:14.999488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:14.999629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:32.224 [2024-11-15 11:37:14.999723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:25:32.224 [2024-11-15 11:37:14.999768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
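
Each management step in these traces is logged as a fixed four-entry group — Action, name, duration, status — so slow steps are easy to tabulate; in the restart so far, 'Initialize NV cache' (46.702 ms) and 'Initialize metadata' (34.247 ms) dominate. A throwaway sketch for mining such a trace, assuming the usual one-entry-per-line console format and a saved copy named build.log (both assumptions, not artifacts of this run):

  # Rank FTL management steps by duration, slowest first.
  grep -o 'name: [^]]*\|duration: [0-9.]* ms' build.log |
  awk '/^name:/     { step = substr($0, 7) }         # remember the step name
       /^duration:/ { print $2 " ms\t" step }' |     # pair it with its duration
  sort -rn | head
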
00:25:32.224 [2024-11-15 11:37:15.068116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:15.068438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:32.224 [2024-11-15 11:37:15.068469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.295 ms 00:25:32.224 [2024-11-15 11:37:15.068483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:15.080323] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:32.224 [2024-11-15 11:37:15.084351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.224 [2024-11-15 11:37:15.084385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:32.224 [2024-11-15 11:37:15.084401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.796 ms 00:25:32.224 [2024-11-15 11:37:15.084413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.224 [2024-11-15 11:37:15.084529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.225 [2024-11-15 11:37:15.084549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:32.225 [2024-11-15 11:37:15.084562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:32.225 [2024-11-15 11:37:15.084573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.225 [2024-11-15 11:37:15.084666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.225 [2024-11-15 11:37:15.084684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:32.225 [2024-11-15 11:37:15.084696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:32.225 [2024-11-15 11:37:15.084706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.225 [2024-11-15 11:37:15.084737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.225 [2024-11-15 11:37:15.084757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:32.225 [2024-11-15 11:37:15.084768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:32.225 [2024-11-15 11:37:15.084779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.225 [2024-11-15 11:37:15.084819] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:32.225 [2024-11-15 11:37:15.084836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.225 [2024-11-15 11:37:15.084847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:32.225 [2024-11-15 11:37:15.084858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:32.225 [2024-11-15 11:37:15.084869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.225 [2024-11-15 11:37:15.112028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.225 [2024-11-15 11:37:15.112113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:32.225 [2024-11-15 11:37:15.112147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.131 ms 00:25:32.225 [2024-11-15 11:37:15.112158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.225 [2024-11-15 11:37:15.112262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.225 [2024-11-15 
11:37:15.112281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:32.225 [2024-11-15 11:37:15.112295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:32.225 [2024-11-15 11:37:15.112305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.225 [2024-11-15 11:37:15.113840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.664 ms, result 0 00:25:33.601  [2024-11-15T11:37:17.485Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-15T11:37:18.421Z] Copying: 45/1024 [MB] (23 MBps) [2024-11-15T11:37:19.356Z] Copying: 69/1024 [MB] (23 MBps) [2024-11-15T11:37:20.290Z] Copying: 92/1024 [MB] (23 MBps) [2024-11-15T11:37:21.224Z] Copying: 116/1024 [MB] (23 MBps) [2024-11-15T11:37:22.177Z] Copying: 140/1024 [MB] (23 MBps) [2024-11-15T11:37:23.152Z] Copying: 164/1024 [MB] (24 MBps) [2024-11-15T11:37:24.526Z] Copying: 188/1024 [MB] (24 MBps) [2024-11-15T11:37:25.461Z] Copying: 213/1024 [MB] (24 MBps) [2024-11-15T11:37:26.396Z] Copying: 237/1024 [MB] (24 MBps) [2024-11-15T11:37:27.333Z] Copying: 261/1024 [MB] (23 MBps) [2024-11-15T11:37:28.270Z] Copying: 284/1024 [MB] (22 MBps) [2024-11-15T11:37:29.211Z] Copying: 307/1024 [MB] (22 MBps) [2024-11-15T11:37:30.148Z] Copying: 329/1024 [MB] (22 MBps) [2024-11-15T11:37:31.526Z] Copying: 353/1024 [MB] (23 MBps) [2024-11-15T11:37:32.461Z] Copying: 376/1024 [MB] (23 MBps) [2024-11-15T11:37:33.397Z] Copying: 399/1024 [MB] (22 MBps) [2024-11-15T11:37:34.333Z] Copying: 421/1024 [MB] (22 MBps) [2024-11-15T11:37:35.276Z] Copying: 444/1024 [MB] (22 MBps) [2024-11-15T11:37:36.213Z] Copying: 467/1024 [MB] (23 MBps) [2024-11-15T11:37:37.146Z] Copying: 490/1024 [MB] (23 MBps) [2024-11-15T11:37:38.517Z] Copying: 515/1024 [MB] (24 MBps) [2024-11-15T11:37:39.453Z] Copying: 538/1024 [MB] (23 MBps) [2024-11-15T11:37:40.386Z] Copying: 563/1024 [MB] (24 MBps) [2024-11-15T11:37:41.318Z] Copying: 587/1024 [MB] (24 MBps) [2024-11-15T11:37:42.253Z] Copying: 611/1024 [MB] (23 MBps) [2024-11-15T11:37:43.201Z] Copying: 635/1024 [MB] (24 MBps) [2024-11-15T11:37:44.136Z] Copying: 659/1024 [MB] (24 MBps) [2024-11-15T11:37:45.511Z] Copying: 683/1024 [MB] (24 MBps) [2024-11-15T11:37:46.448Z] Copying: 707/1024 [MB] (23 MBps) [2024-11-15T11:37:47.384Z] Copying: 731/1024 [MB] (24 MBps) [2024-11-15T11:37:48.320Z] Copying: 755/1024 [MB] (24 MBps) [2024-11-15T11:37:49.255Z] Copying: 779/1024 [MB] (23 MBps) [2024-11-15T11:37:50.190Z] Copying: 803/1024 [MB] (23 MBps) [2024-11-15T11:37:51.565Z] Copying: 827/1024 [MB] (24 MBps) [2024-11-15T11:37:52.131Z] Copying: 852/1024 [MB] (24 MBps) [2024-11-15T11:37:53.507Z] Copying: 876/1024 [MB] (24 MBps) [2024-11-15T11:37:54.442Z] Copying: 900/1024 [MB] (24 MBps) [2024-11-15T11:37:55.376Z] Copying: 925/1024 [MB] (24 MBps) [2024-11-15T11:37:56.312Z] Copying: 950/1024 [MB] (24 MBps) [2024-11-15T11:37:57.246Z] Copying: 974/1024 [MB] (24 MBps) [2024-11-15T11:37:58.182Z] Copying: 998/1024 [MB] (24 MBps) [2024-11-15T11:37:59.557Z] Copying: 1022/1024 [MB] (24 MBps) [2024-11-15T11:37:59.557Z] Copying: 1048472/1048576 [kB] (1136 kBps) [2024-11-15T11:37:59.557Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-15 11:37:59.260516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.260660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:16.608 [2024-11-15 11:37:59.260698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.004 ms 00:26:16.608 [2024-11-15 11:37:59.260712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.263965] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:16.608 [2024-11-15 11:37:59.270194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.270232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:16.608 [2024-11-15 11:37:59.270262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.169 ms 00:26:16.608 [2024-11-15 11:37:59.270273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.281530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.281572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:16.608 [2024-11-15 11:37:59.281605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.313 ms 00:26:16.608 [2024-11-15 11:37:59.281616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.303639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.303681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:16.608 [2024-11-15 11:37:59.303712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.003 ms 00:26:16.608 [2024-11-15 11:37:59.303722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.309131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.309327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:16.608 [2024-11-15 11:37:59.309353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.375 ms 00:26:16.608 [2024-11-15 11:37:59.309365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.335794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.335984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:16.608 [2024-11-15 11:37:59.336009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.362 ms 00:26:16.608 [2024-11-15 11:37:59.336020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.351758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.351799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:16.608 [2024-11-15 11:37:59.351830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.662 ms 00:26:16.608 [2024-11-15 11:37:59.351841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.608 [2024-11-15 11:37:59.470828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.608 [2024-11-15 11:37:59.470873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:16.608 [2024-11-15 11:37:59.470914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.927 ms 00:26:16.609 [2024-11-15 11:37:59.470926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.609 [2024-11-15 11:37:59.499694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.609 [2024-11-15 
11:37:59.499844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:16.609 [2024-11-15 11:37:59.499956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.748 ms 00:26:16.609 [2024-11-15 11:37:59.499977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.609 [2024-11-15 11:37:59.525287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.609 [2024-11-15 11:37:59.525494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:16.609 [2024-11-15 11:37:59.525518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.269 ms 00:26:16.609 [2024-11-15 11:37:59.525529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.609 [2024-11-15 11:37:59.549645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.609 [2024-11-15 11:37:59.549685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:16.609 [2024-11-15 11:37:59.549699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.077 ms 00:26:16.609 [2024-11-15 11:37:59.549708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.868 [2024-11-15 11:37:59.573479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.868 [2024-11-15 11:37:59.573518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:16.868 [2024-11-15 11:37:59.573532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.711 ms 00:26:16.868 [2024-11-15 11:37:59.573542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.868 [2024-11-15 11:37:59.573578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:16.868 [2024-11-15 11:37:59.573598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:26:16.868 [2024-11-15 11:37:59.573611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573721] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.868 [2024-11-15 11:37:59.573780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 
[2024-11-15 11:37:59.573970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.573990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:26:16.869 [2024-11-15 11:37:59.574295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:16.869 [2024-11-15 11:37:59.574763] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:16.869 [2024-11-15 11:37:59.574773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6b15c40c-275b-44da-9182-71512f00bc3a 00:26:16.869 [2024-11-15 11:37:59.574784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:26:16.869 [2024-11-15 11:37:59.574798] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130496 00:26:16.869 [2024-11-15 11:37:59.574819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:26:16.869 [2024-11-15 11:37:59.574830] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:26:16.869 [2024-11-15 11:37:59.574840] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:16.869 [2024-11-15 11:37:59.574851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:16.869 [2024-11-15 11:37:59.574861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:16.869 [2024-11-15 11:37:59.574871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:16.869 [2024-11-15 11:37:59.574880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:16.870 [2024-11-15 11:37:59.574890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.870 [2024-11-15 11:37:59.574900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:16.870 [2024-11-15 11:37:59.574910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:26:16.870 [2024-11-15 11:37:59.574920] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.588819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.870 [2024-11-15 11:37:59.588853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:16.870 [2024-11-15 11:37:59.588867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.877 ms 00:26:16.870 [2024-11-15 11:37:59.588877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.589409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.870 [2024-11-15 11:37:59.589452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:16.870 [2024-11-15 11:37:59.589480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:26:16.870 [2024-11-15 11:37:59.589496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.625330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.625370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.870 [2024-11-15 11:37:59.625415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.625426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.625479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.625492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.870 [2024-11-15 11:37:59.625502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.625517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.625578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.625595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.870 [2024-11-15 11:37:59.625606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.625615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.625633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.625645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.870 [2024-11-15 11:37:59.625655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.625664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.709393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.709641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.870 [2024-11-15 11:37:59.709667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.709680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.870 [2024-11-15 11:37:59.778181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.870 [2024-11-15 11:37:59.778315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.870 [2024-11-15 11:37:59.778391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.870 [2024-11-15 11:37:59.778540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:16.870 [2024-11-15 11:37:59.778624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.870 [2024-11-15 11:37:59.778706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:16.870 [2024-11-15 11:37:59.778776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.870 [2024-11-15 11:37:59.778786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:16.870 [2024-11-15 11:37:59.778796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.870 [2024-11-15 11:37:59.778922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.556 ms, result 0 00:26:18.771 00:26:18.771 00:26:18.771 11:38:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:20.147 11:38:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:20.405 [2024-11-15 11:38:03.132439] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 
initialization... 00:26:20.405 [2024-11-15 11:38:03.132612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80095 ] 00:26:20.405 [2024-11-15 11:38:03.324840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.670 [2024-11-15 11:38:03.465926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.948 [2024-11-15 11:38:03.777265] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:20.948 [2024-11-15 11:38:03.777361] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:21.222 [2024-11-15 11:38:03.935851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.936119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:21.222 [2024-11-15 11:38:03.936158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:21.222 [2024-11-15 11:38:03.936171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.936240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.936257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.222 [2024-11-15 11:38:03.936274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:21.222 [2024-11-15 11:38:03.936285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.936314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:21.222 [2024-11-15 11:38:03.937153] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:21.222 [2024-11-15 11:38:03.937182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.937195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.222 [2024-11-15 11:38:03.937206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:26:21.222 [2024-11-15 11:38:03.937216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.939147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:21.222 [2024-11-15 11:38:03.952986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.953024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:21.222 [2024-11-15 11:38:03.953113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.840 ms 00:26:21.222 [2024-11-15 11:38:03.953126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.953215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.953248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:21.222 [2024-11-15 11:38:03.953260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:21.222 [2024-11-15 11:38:03.953271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.961448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.961487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.222 [2024-11-15 11:38:03.961501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.111 ms 00:26:21.222 [2024-11-15 11:38:03.961519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.961599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.961615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.222 [2024-11-15 11:38:03.961626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:26:21.222 [2024-11-15 11:38:03.961636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.961700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.961716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:21.222 [2024-11-15 11:38:03.961727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:21.222 [2024-11-15 11:38:03.961737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.961768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:21.222 [2024-11-15 11:38:03.966021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.966082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.222 [2024-11-15 11:38:03.966113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.264 ms 00:26:21.222 [2024-11-15 11:38:03.966129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.966163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.222 [2024-11-15 11:38:03.966177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:21.222 [2024-11-15 11:38:03.966188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:21.222 [2024-11-15 11:38:03.966198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.222 [2024-11-15 11:38:03.966241] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:21.222 [2024-11-15 11:38:03.966268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:21.222 [2024-11-15 11:38:03.966353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:21.222 [2024-11-15 11:38:03.966376] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:21.222 [2024-11-15 11:38:03.966495] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:21.223 [2024-11-15 11:38:03.966516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:21.223 [2024-11-15 11:38:03.966531] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:21.223 [2024-11-15 11:38:03.966545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:21.223 [2024-11-15 11:38:03.966557] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:21.223 [2024-11-15 11:38:03.966569] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:21.223 [2024-11-15 11:38:03.966580] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:21.223 [2024-11-15 11:38:03.966591] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:21.223 [2024-11-15 11:38:03.966606] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:21.223 [2024-11-15 11:38:03.966618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.223 [2024-11-15 11:38:03.966629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:21.223 [2024-11-15 11:38:03.966640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.380 ms 00:26:21.223 [2024-11-15 11:38:03.966651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.223 [2024-11-15 11:38:03.966734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.223 [2024-11-15 11:38:03.966747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:21.223 [2024-11-15 11:38:03.966759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:21.223 [2024-11-15 11:38:03.966769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.223 [2024-11-15 11:38:03.966896] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:21.223 [2024-11-15 11:38:03.966914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:21.223 [2024-11-15 11:38:03.966925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.223 [2024-11-15 11:38:03.966936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.966947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:21.223 [2024-11-15 11:38:03.966957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.966967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:21.223 [2024-11-15 11:38:03.966977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:21.223 [2024-11-15 11:38:03.966987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:21.223 [2024-11-15 11:38:03.966997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.223 [2024-11-15 11:38:03.967007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:21.223 [2024-11-15 11:38:03.967017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:21.223 [2024-11-15 11:38:03.967040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:21.223 [2024-11-15 11:38:03.967050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:21.223 [2024-11-15 11:38:03.967066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:21.223 [2024-11-15 11:38:03.967087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:21.223 [2024-11-15 11:38:03.967107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967117] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:21.223 [2024-11-15 11:38:03.967153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:21.223 [2024-11-15 11:38:03.967185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:21.223 [2024-11-15 11:38:03.967214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:21.223 [2024-11-15 11:38:03.967259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:21.223 [2024-11-15 11:38:03.967303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.223 [2024-11-15 11:38:03.967320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:21.223 [2024-11-15 11:38:03.967329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:21.223 [2024-11-15 11:38:03.967338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:21.223 [2024-11-15 11:38:03.967347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:21.223 [2024-11-15 11:38:03.967356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:21.223 [2024-11-15 11:38:03.967364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:21.223 [2024-11-15 11:38:03.967382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:21.223 [2024-11-15 11:38:03.967391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:21.223 [2024-11-15 11:38:03.967417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:21.223 [2024-11-15 11:38:03.967427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:21.223 [2024-11-15 11:38:03.967452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:21.223 [2024-11-15 11:38:03.967461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:21.223 [2024-11-15 11:38:03.967470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:21.223 
[2024-11-15 11:38:03.967479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:21.223 [2024-11-15 11:38:03.967488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:21.223 [2024-11-15 11:38:03.967497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:21.223 [2024-11-15 11:38:03.967508] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:21.223 [2024-11-15 11:38:03.967521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:21.223 [2024-11-15 11:38:03.967542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:21.223 [2024-11-15 11:38:03.967551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:21.223 [2024-11-15 11:38:03.967561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:21.223 [2024-11-15 11:38:03.967571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:21.223 [2024-11-15 11:38:03.967580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:21.223 [2024-11-15 11:38:03.967590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:21.223 [2024-11-15 11:38:03.967599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:21.223 [2024-11-15 11:38:03.967619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:21.223 [2024-11-15 11:38:03.967631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:21.223 [2024-11-15 11:38:03.967684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:21.223 [2024-11-15 11:38:03.967701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:21.223 [2024-11-15 11:38:03.967722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:21.223 [2024-11-15 11:38:03.967732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:21.223 [2024-11-15 11:38:03.967741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:21.223 [2024-11-15 11:38:03.967752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.223 [2024-11-15 11:38:03.967762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:21.223 [2024-11-15 11:38:03.967773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:26:21.223 [2024-11-15 11:38:03.967787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.223 [2024-11-15 11:38:04.001854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.223 [2024-11-15 11:38:04.002145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.223 [2024-11-15 11:38:04.002288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.011 ms 00:26:21.223 [2024-11-15 11:38:04.002337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.002597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.002722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:21.224 [2024-11-15 11:38:04.002834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:21.224 [2024-11-15 11:38:04.002881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.047644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.047843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.224 [2024-11-15 11:38:04.047955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.579 ms 00:26:21.224 [2024-11-15 11:38:04.048001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.048098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.048212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.224 [2024-11-15 11:38:04.048266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:21.224 [2024-11-15 11:38:04.048302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.049028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.049239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.224 [2024-11-15 11:38:04.049344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:26:21.224 [2024-11-15 11:38:04.049390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.049609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.049656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.224 [2024-11-15 11:38:04.049752] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:26:21.224 [2024-11-15 11:38:04.049807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.066353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.066519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.224 [2024-11-15 11:38:04.066630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.430 ms 00:26:21.224 [2024-11-15 11:38:04.066675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.080391] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:21.224 [2024-11-15 11:38:04.080598] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:21.224 [2024-11-15 11:38:04.080720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.080760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:21.224 [2024-11-15 11:38:04.080858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.898 ms 00:26:21.224 [2024-11-15 11:38:04.080901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.104459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.104628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:21.224 [2024-11-15 11:38:04.104653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.491 ms 00:26:21.224 [2024-11-15 11:38:04.104666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.117152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.117361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:21.224 [2024-11-15 11:38:04.117387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.435 ms 00:26:21.224 [2024-11-15 11:38:04.117400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.129564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.129601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:21.224 [2024-11-15 11:38:04.129615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.108 ms 00:26:21.224 [2024-11-15 11:38:04.129625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.224 [2024-11-15 11:38:04.130342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.224 [2024-11-15 11:38:04.130373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:21.224 [2024-11-15 11:38:04.130387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:26:21.224 [2024-11-15 11:38:04.130402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.193488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.193556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:21.483 [2024-11-15 11:38:04.193615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.062 ms 00:26:21.483 [2024-11-15 11:38:04.193627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.203652] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:21.483 [2024-11-15 11:38:04.205697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.205727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:21.483 [2024-11-15 11:38:04.205742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.998 ms 00:26:21.483 [2024-11-15 11:38:04.205752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.205835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.205853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:21.483 [2024-11-15 11:38:04.205865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:21.483 [2024-11-15 11:38:04.205879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.207769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.207799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:21.483 [2024-11-15 11:38:04.207812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:26:21.483 [2024-11-15 11:38:04.207822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.207852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.207866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:21.483 [2024-11-15 11:38:04.207877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:21.483 [2024-11-15 11:38:04.207886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.207927] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:21.483 [2024-11-15 11:38:04.207941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.207950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:21.483 [2024-11-15 11:38:04.207961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:21.483 [2024-11-15 11:38:04.207970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.232758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.232798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:21.483 [2024-11-15 11:38:04.232813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.767 ms 00:26:21.483 [2024-11-15 11:38:04.232829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.483 [2024-11-15 11:38:04.232904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:21.483 [2024-11-15 11:38:04.232921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:21.483 [2024-11-15 11:38:04.232932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:21.483 [2024-11-15 11:38:04.232942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:21.483 [2024-11-15 11:38:04.234552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.035 ms, result 0 00:26:22.858  [2024-11-15T11:38:06.742Z] Copying: 980/1048576 [kB] (980 kBps) [... repeated spdk_dd copy-progress updates elided ...] [2024-11-15T11:38:47.548Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-15 11:38:47.476629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.599 [2024-11-15 11:38:47.476922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:04.599 [2024-11-15 11:38:47.477118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:04.599 [2024-11-15 11:38:47.477270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.599 [2024-11-15 11:38:47.477362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:04.599 [2024-11-15 11:38:47.481282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.599 [2024-11-15 11:38:47.481473] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:04.599 [2024-11-15 11:38:47.481628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.739 ms 00:27:04.599 [2024-11-15 11:38:47.481687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.599 [2024-11-15 11:38:47.482205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.599 [2024-11-15 11:38:47.482370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:04.599 [2024-11-15 11:38:47.482559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:04.599 [2024-11-15 11:38:47.482587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.599 [2024-11-15 11:38:47.495073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.599 [2024-11-15 11:38:47.495210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:04.599 [2024-11-15 11:38:47.495337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.439 ms 00:27:04.599 [2024-11-15 11:38:47.495399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.599 [2024-11-15 11:38:47.502498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.599 [2024-11-15 11:38:47.502668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:04.599 [2024-11-15 11:38:47.502727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:27:04.599 [2024-11-15 11:38:47.502743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.599 [2024-11-15 11:38:47.531951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.599 [2024-11-15 11:38:47.532168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:04.599 [2024-11-15 11:38:47.532360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.119 ms 00:27:04.599 [2024-11-15 11:38:47.532506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.549146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.860 [2024-11-15 11:38:47.549342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:04.860 [2024-11-15 11:38:47.549554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.515 ms 00:27:04.860 [2024-11-15 11:38:47.549611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.551432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.860 [2024-11-15 11:38:47.551620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:04.860 [2024-11-15 11:38:47.551769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:27:04.860 [2024-11-15 11:38:47.551897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.579651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.860 [2024-11-15 11:38:47.579863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:04.860 [2024-11-15 11:38:47.580121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.548 ms 00:27:04.860 [2024-11-15 11:38:47.580274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.607195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:04.860 [2024-11-15 11:38:47.607376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:04.860 [2024-11-15 11:38:47.607577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.684 ms 00:27:04.860 [2024-11-15 11:38:47.607636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.633733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.860 [2024-11-15 11:38:47.633919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:04.860 [2024-11-15 11:38:47.634075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.987 ms 00:27:04.860 [2024-11-15 11:38:47.634137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.659296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.860 [2024-11-15 11:38:47.659488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:04.860 [2024-11-15 11:38:47.659622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.930 ms 00:27:04.860 [2024-11-15 11:38:47.659677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.860 [2024-11-15 11:38:47.659932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:04.860 [2024-11-15 11:38:47.660104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:04.860 [2024-11-15 11:38:47.660273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:04.860 [2024-11-15 11:38:47.660457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660746] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.660988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 
[2024-11-15 11:38:47.661782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:04.860 [2024-11-15 11:38:47.661947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.661960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.661974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.661989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 
state: free 00:27:04.861 [2024-11-15 11:38:47.662140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 
0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:04.861 [2024-11-15 11:38:47.662639] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:04.861 [2024-11-15 11:38:47.662652] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6b15c40c-275b-44da-9182-71512f00bc3a 00:27:04.861 [2024-11-15 11:38:47.662666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:04.861 [2024-11-15 11:38:47.662677] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135104 00:27:04.861 [2024-11-15 11:38:47.662690] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133120 00:27:04.861 [2024-11-15 11:38:47.662712] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149 00:27:04.861 [2024-11-15 11:38:47.662725] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:04.861 [2024-11-15 11:38:47.662738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:04.861 [2024-11-15 11:38:47.662750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:04.861 [2024-11-15 11:38:47.662776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:04.861 [2024-11-15 11:38:47.662788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:04.861 [2024-11-15 11:38:47.662801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.861 [2024-11-15 11:38:47.662814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:04.861 [2024-11-15 11:38:47.662827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:27:04.861 [2024-11-15 11:38:47.662852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.861 [2024-11-15 11:38:47.677890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.861 [2024-11-15 11:38:47.677938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:04.861 [2024-11-15 11:38:47.677974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.988 ms 00:27:04.861 [2024-11-15 
11:38:47.677987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.861 [2024-11-15 11:38:47.678548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:04.861 [2024-11-15 11:38:47.678600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:04.861 [2024-11-15 11:38:47.678618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:27:04.861 [2024-11-15 11:38:47.678632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.861 [2024-11-15 11:38:47.719772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.861 [2024-11-15 11:38:47.719818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:04.861 [2024-11-15 11:38:47.719854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.861 [2024-11-15 11:38:47.719868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.861 [2024-11-15 11:38:47.719934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.861 [2024-11-15 11:38:47.719952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:04.861 [2024-11-15 11:38:47.719967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.861 [2024-11-15 11:38:47.720011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.861 [2024-11-15 11:38:47.720185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.861 [2024-11-15 11:38:47.720210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:04.861 [2024-11-15 11:38:47.720226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.861 [2024-11-15 11:38:47.720241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:04.861 [2024-11-15 11:38:47.720270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:04.861 [2024-11-15 11:38:47.720287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:04.861 [2024-11-15 11:38:47.720302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:04.861 [2024-11-15 11:38:47.720315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.812979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.813099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:05.121 [2024-11-15 11:38:47.813140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.813154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.886395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.886467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:05.121 [2024-11-15 11:38:47.886489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.886503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.886577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.886607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:05.121 [2024-11-15 11:38:47.886637] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.886657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.886758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.886779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:05.121 [2024-11-15 11:38:47.886794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.886807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.886936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.886966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:05.121 [2024-11-15 11:38:47.886990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.887005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.887081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.887131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:05.121 [2024-11-15 11:38:47.887148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.887162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.887214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.887234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:05.121 [2024-11-15 11:38:47.887248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.887271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.887329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:05.121 [2024-11-15 11:38:47.887350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:05.121 [2024-11-15 11:38:47.887365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:05.121 [2024-11-15 11:38:47.887393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:05.121 [2024-11-15 11:38:47.887584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 410.921 ms, result 0 00:27:06.058 00:27:06.058 00:27:06.058 11:38:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:07.964 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:07.964 11:38:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:07.964 [2024-11-15 11:38:50.641409] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:27:07.964 [2024-11-15 11:38:50.641569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80575 ] 00:27:07.965 [2024-11-15 11:38:50.815847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.223 [2024-11-15 11:38:50.951872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.481 [2024-11-15 11:38:51.273126] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:08.481 [2024-11-15 11:38:51.273249] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:08.741 [2024-11-15 11:38:51.435320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.435393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:08.741 [2024-11-15 11:38:51.435441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:08.741 [2024-11-15 11:38:51.435454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.435521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.435542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:08.741 [2024-11-15 11:38:51.435561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:08.741 [2024-11-15 11:38:51.435573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.435607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:08.741 [2024-11-15 11:38:51.436541] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:08.741 [2024-11-15 11:38:51.436601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.436617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:08.741 [2024-11-15 11:38:51.436632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:27:08.741 [2024-11-15 11:38:51.436645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.438614] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:08.741 [2024-11-15 11:38:51.452789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.452852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:08.741 [2024-11-15 11:38:51.452888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.176 ms 00:27:08.741 [2024-11-15 11:38:51.452901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.452998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.453039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:08.741 [2024-11-15 11:38:51.453059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:08.741 [2024-11-15 11:38:51.453071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.461700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:08.741 [2024-11-15 11:38:51.461758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:08.741 [2024-11-15 11:38:51.461800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.462 ms 00:27:08.741 [2024-11-15 11:38:51.461813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.461906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.461927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:08.741 [2024-11-15 11:38:51.461941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:27:08.741 [2024-11-15 11:38:51.461953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.462063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.462084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:08.741 [2024-11-15 11:38:51.462122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:08.741 [2024-11-15 11:38:51.462136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.462184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:08.741 [2024-11-15 11:38:51.466714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.466774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:08.741 [2024-11-15 11:38:51.466816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.539 ms 00:27:08.741 [2024-11-15 11:38:51.466829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.466875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.466895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:08.741 [2024-11-15 11:38:51.466908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:08.741 [2024-11-15 11:38:51.466920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.466993] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:08.741 [2024-11-15 11:38:51.467047] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:08.741 [2024-11-15 11:38:51.467107] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:08.741 [2024-11-15 11:38:51.467138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:08.741 [2024-11-15 11:38:51.467245] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:08.741 [2024-11-15 11:38:51.467263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:08.741 [2024-11-15 11:38:51.467280] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:08.741 [2024-11-15 11:38:51.467296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:08.741 [2024-11-15 11:38:51.467311] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:08.741 [2024-11-15 11:38:51.467324] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:08.741 [2024-11-15 11:38:51.467337] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:08.741 [2024-11-15 11:38:51.467356] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:08.741 [2024-11-15 11:38:51.467369] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:08.741 [2024-11-15 11:38:51.467403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.467427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:08.741 [2024-11-15 11:38:51.467442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:27:08.741 [2024-11-15 11:38:51.467456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.467555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.741 [2024-11-15 11:38:51.467575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:08.741 [2024-11-15 11:38:51.467589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:08.741 [2024-11-15 11:38:51.467602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.741 [2024-11-15 11:38:51.467728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:08.741 [2024-11-15 11:38:51.467764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:08.741 [2024-11-15 11:38:51.467781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:08.741 [2024-11-15 11:38:51.467794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.741 [2024-11-15 11:38:51.467807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:08.741 [2024-11-15 11:38:51.467820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:08.741 [2024-11-15 11:38:51.467832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:08.742 [2024-11-15 11:38:51.467844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:08.742 [2024-11-15 11:38:51.467856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:08.742 [2024-11-15 11:38:51.467868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:08.742 [2024-11-15 11:38:51.467880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:08.742 [2024-11-15 11:38:51.467892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:08.742 [2024-11-15 11:38:51.467904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:08.742 [2024-11-15 11:38:51.467915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:08.742 [2024-11-15 11:38:51.467928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:08.742 [2024-11-15 11:38:51.467956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.742 [2024-11-15 11:38:51.467970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:08.742 [2024-11-15 11:38:51.467984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:08.742 [2024-11-15 11:38:51.467996] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:08.742 [2024-11-15 11:38:51.468021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.742 [2024-11-15 11:38:51.468063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:08.742 [2024-11-15 11:38:51.468076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.742 [2024-11-15 11:38:51.468100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:08.742 [2024-11-15 11:38:51.468112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.742 [2024-11-15 11:38:51.468136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:08.742 [2024-11-15 11:38:51.468147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:08.742 [2024-11-15 11:38:51.468172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:08.742 [2024-11-15 11:38:51.468183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:08.742 [2024-11-15 11:38:51.468207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:08.742 [2024-11-15 11:38:51.468220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:08.742 [2024-11-15 11:38:51.468231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:08.742 [2024-11-15 11:38:51.468243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:08.742 [2024-11-15 11:38:51.468255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:08.742 [2024-11-15 11:38:51.468267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:08.742 [2024-11-15 11:38:51.468290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:08.742 [2024-11-15 11:38:51.468302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468313] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:08.742 [2024-11-15 11:38:51.468327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:08.742 [2024-11-15 11:38:51.468339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:08.742 [2024-11-15 11:38:51.468353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:08.742 [2024-11-15 11:38:51.468366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:08.742 [2024-11-15 11:38:51.468378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:08.742 [2024-11-15 11:38:51.468391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:08.742 
[2024-11-15 11:38:51.468404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:08.742 [2024-11-15 11:38:51.468416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:08.742 [2024-11-15 11:38:51.468429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:08.742 [2024-11-15 11:38:51.468442] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:08.742 [2024-11-15 11:38:51.468458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:08.742 [2024-11-15 11:38:51.468494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:08.742 [2024-11-15 11:38:51.468507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:08.742 [2024-11-15 11:38:51.468519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:08.742 [2024-11-15 11:38:51.468532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:08.742 [2024-11-15 11:38:51.468544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:08.742 [2024-11-15 11:38:51.468558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:08.742 [2024-11-15 11:38:51.468570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:08.742 [2024-11-15 11:38:51.468583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:08.742 [2024-11-15 11:38:51.468595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:08.742 [2024-11-15 11:38:51.468659] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:08.742 [2024-11-15 11:38:51.468673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:08.742 [2024-11-15 11:38:51.468699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:08.742 [2024-11-15 11:38:51.468713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:08.742 [2024-11-15 11:38:51.468725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:08.742 [2024-11-15 11:38:51.468739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.468752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:08.742 [2024-11-15 11:38:51.468766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:27:08.742 [2024-11-15 11:38:51.468779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.504342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.504421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:08.742 [2024-11-15 11:38:51.504474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.494 ms 00:27:08.742 [2024-11-15 11:38:51.504496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.504601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.504621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:08.742 [2024-11-15 11:38:51.504635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:27:08.742 [2024-11-15 11:38:51.504647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.556809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.556880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:08.742 [2024-11-15 11:38:51.556917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.015 ms 00:27:08.742 [2024-11-15 11:38:51.556930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.556992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.557011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:08.742 [2024-11-15 11:38:51.557032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:08.742 [2024-11-15 11:38:51.557061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.557785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.557833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:08.742 [2024-11-15 11:38:51.557851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:27:08.742 [2024-11-15 11:38:51.557864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.558079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.558120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:08.742 [2024-11-15 11:38:51.558146] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:27:08.742 [2024-11-15 11:38:51.558159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.742 [2024-11-15 11:38:51.575679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.742 [2024-11-15 11:38:51.575746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:08.742 [2024-11-15 11:38:51.575782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.485 ms 00:27:08.742 [2024-11-15 11:38:51.575795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.743 [2024-11-15 11:38:51.589824] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:08.743 [2024-11-15 11:38:51.589885] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:08.743 [2024-11-15 11:38:51.589923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.743 [2024-11-15 11:38:51.589936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:08.743 [2024-11-15 11:38:51.589950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.001 ms 00:27:08.743 [2024-11-15 11:38:51.589962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.743 [2024-11-15 11:38:51.613822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.743 [2024-11-15 11:38:51.613900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:08.743 [2024-11-15 11:38:51.613937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.812 ms 00:27:08.743 [2024-11-15 11:38:51.613950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.743 [2024-11-15 11:38:51.626793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.743 [2024-11-15 11:38:51.626856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:08.743 [2024-11-15 11:38:51.626891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.794 ms 00:27:08.743 [2024-11-15 11:38:51.626903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.743 [2024-11-15 11:38:51.639638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.743 [2024-11-15 11:38:51.639698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:08.743 [2024-11-15 11:38:51.639733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.690 ms 00:27:08.743 [2024-11-15 11:38:51.639745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.743 [2024-11-15 11:38:51.640614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.743 [2024-11-15 11:38:51.640668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:08.743 [2024-11-15 11:38:51.640709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:27:08.743 [2024-11-15 11:38:51.640721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.705715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.705807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:09.001 [2024-11-15 11:38:51.705854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.965 ms 00:27:09.001 [2024-11-15 11:38:51.705868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.716537] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:09.001 [2024-11-15 11:38:51.718920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.718974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:09.001 [2024-11-15 11:38:51.719009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.984 ms 00:27:09.001 [2024-11-15 11:38:51.719022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.719141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.719165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:09.001 [2024-11-15 11:38:51.719186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:09.001 [2024-11-15 11:38:51.719198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.720309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.720362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:09.001 [2024-11-15 11:38:51.720380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:27:09.001 [2024-11-15 11:38:51.720392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.720431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.720449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:09.001 [2024-11-15 11:38:51.720463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:09.001 [2024-11-15 11:38:51.720474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.720528] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:09.001 [2024-11-15 11:38:51.720547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.720560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:09.001 [2024-11-15 11:38:51.720589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:09.001 [2024-11-15 11:38:51.720635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.746366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.746444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:09.001 [2024-11-15 11:38:51.746488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:27:09.001 [2024-11-15 11:38:51.746501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.001 [2024-11-15 11:38:51.746586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.001 [2024-11-15 11:38:51.746608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:09.001 [2024-11-15 11:38:51.746622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:09.001 [2024-11-15 11:38:51.746633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:09.001 [2024-11-15 11:38:51.748301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.379 ms, result 0 00:27:10.377  [2024-11-15T11:38:54.261Z] Copying: 20/1024 [MB] (20 MBps) [...] [2024-11-15T11:39:42.507Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-15 11:39:42.300533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.300620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:59.558 [2024-11-15 11:39:42.300644] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:59.558 [2024-11-15 11:39:42.300657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.300692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:59.558 [2024-11-15 11:39:42.304890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.304946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:59.558 [2024-11-15 11:39:42.304976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.174 ms 00:27:59.558 [2024-11-15 11:39:42.304987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.305278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.305299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:59.558 [2024-11-15 11:39:42.305312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:27:59.558 [2024-11-15 11:39:42.305323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.308863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.308910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:59.558 [2024-11-15 11:39:42.308940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.520 ms 00:27:59.558 [2024-11-15 11:39:42.308957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.314571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.314602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:59.558 [2024-11-15 11:39:42.314630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.591 ms 00:27:59.558 [2024-11-15 11:39:42.314647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.340592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.340631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:59.558 [2024-11-15 11:39:42.340662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.880 ms 00:27:59.558 [2024-11-15 11:39:42.340672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.355956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.356004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:59.558 [2024-11-15 11:39:42.356036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.259 ms 00:27:59.558 [2024-11-15 11:39:42.356063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.357945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.358005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:59.558 [2024-11-15 11:39:42.358036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.845 ms 00:27:59.558 [2024-11-15 11:39:42.358059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.383023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:59.558 [2024-11-15 11:39:42.383070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:59.558 [2024-11-15 11:39:42.383100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.926 ms 00:27:59.558 [2024-11-15 11:39:42.383110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.407394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.407444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:59.558 [2024-11-15 11:39:42.407474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.262 ms 00:27:59.558 [2024-11-15 11:39:42.407483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.431759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.431800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:59.558 [2024-11-15 11:39:42.431831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.253 ms 00:27:59.558 [2024-11-15 11:39:42.431841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.455892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.558 [2024-11-15 11:39:42.455933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:59.558 [2024-11-15 11:39:42.455963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.000 ms 00:27:59.558 [2024-11-15 11:39:42.455974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.558 [2024-11-15 11:39:42.455996] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:59.558 [2024-11-15 11:39:42.456020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:59.558 [2024-11-15 11:39:42.456036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:59.558 [2024-11-15 11:39:42.456060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 
[2024-11-15 11:39:42.456162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:59.558 [2024-11-15 11:39:42.456441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:27:59.559 [2024-11-15 11:39:42.456451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.456995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:59.559 [2024-11-15 11:39:42.457157] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:59.559 [2024-11-15 11:39:42.457168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6b15c40c-275b-44da-9182-71512f00bc3a 00:27:59.559 [2024-11-15 11:39:42.457179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:59.559 [2024-11-15 11:39:42.457189] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:59.559 [2024-11-15 11:39:42.457200] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:59.559 [2024-11-15 11:39:42.457210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:59.559 [2024-11-15 11:39:42.457220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:59.559 [2024-11-15 11:39:42.457230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:59.559 [2024-11-15 11:39:42.457253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:59.559 [2024-11-15 11:39:42.457263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:59.559 [2024-11-15 11:39:42.457273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:59.559 [2024-11-15 11:39:42.457283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.559 [2024-11-15 11:39:42.457294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:59.559 [2024-11-15 11:39:42.457306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:27:59.559 [2024-11-15 
11:39:42.457321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.559 [2024-11-15 11:39:42.472178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.559 [2024-11-15 11:39:42.472228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:59.559 [2024-11-15 11:39:42.472258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.834 ms 00:27:59.559 [2024-11-15 11:39:42.472270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.559 [2024-11-15 11:39:42.472741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:59.559 [2024-11-15 11:39:42.472774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:59.559 [2024-11-15 11:39:42.472788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:27:59.559 [2024-11-15 11:39:42.472799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.509410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.509454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:59.819 [2024-11-15 11:39:42.509483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.509494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.509546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.509566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:59.819 [2024-11-15 11:39:42.509585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.509594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.509716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.509750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:59.819 [2024-11-15 11:39:42.509761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.509772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.509792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.509805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:59.819 [2024-11-15 11:39:42.509823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.509833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.593771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.593830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:59.819 [2024-11-15 11:39:42.593863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.593874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.662366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.662425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:59.819 [2024-11-15 11:39:42.662458] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.662468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.662538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.662554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:59.819 [2024-11-15 11:39:42.662566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.662576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.662639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.662655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:59.819 [2024-11-15 11:39:42.662666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.662682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.662828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.662848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:59.819 [2024-11-15 11:39:42.662861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.662871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.662919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.662937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:59.819 [2024-11-15 11:39:42.662950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.662959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.663008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.663050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:59.819 [2024-11-15 11:39:42.663081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.663093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.663154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:59.819 [2024-11-15 11:39:42.663171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:59.819 [2024-11-15 11:39:42.663183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:59.819 [2024-11-15 11:39:42.663200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:59.819 [2024-11-15 11:39:42.663345] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.780 ms, result 0 00:28:00.754 00:28:00.754 00:28:00.754 11:39:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:02.654 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78617 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78617 ']' 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78617 00:28:02.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78617) - No such process 00:28:02.654 Process with pid 78617 is not found 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78617 is not found' 00:28:02.654 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:02.913 Remove shared memory files 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:02.913 00:28:02.913 real 4m6.781s 00:28:02.913 user 4m40.752s 00:28:02.913 sys 0m37.344s 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:02.913 ************************************ 00:28:02.913 11:39:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:02.913 END TEST ftl_dirty_shutdown 00:28:02.913 ************************************ 00:28:03.172 11:39:45 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:03.172 11:39:45 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:03.172 11:39:45 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:03.172 11:39:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:03.172 ************************************ 00:28:03.172 START TEST ftl_upgrade_shutdown 00:28:03.172 ************************************ 00:28:03.172 11:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:03.172 * Looking for test storage... 
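The xtrace that follows shows scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' (IFS=.-:), reads them into arrays, and compares them field by field. A standalone sketch of the same idiom, assuming purely numeric fields as in the '1.15' vs '2' comparison below (ver_lt is a hypothetical name, not the actual helper in scripts/common.sh):

ver_lt() {                                      # true (exit 0) when $1 sorts before $2
    local IFS=.-:                               # same separators cmp_versions uses
    local -a a=($1) b=($2)                      # split "1.15" -> (1 15), "2" -> (2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0   # first lower field decides
        ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1
    done
    return 1                                    # equal versions are not "less than"
}
ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov predates 2.x'

Since 1.15 sorts before 2 here, the trace goes on to export the lcov 1.x --rc branch/function coverage options.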
00:28:03.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:03.172 11:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:03.172 11:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:03.172 11:39:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.172 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.173 --rc genhtml_branch_coverage=1 00:28:03.173 --rc genhtml_function_coverage=1 00:28:03.173 --rc genhtml_legend=1 00:28:03.173 --rc geninfo_all_blocks=1 00:28:03.173 --rc geninfo_unexecuted_blocks=1 00:28:03.173 00:28:03.173 ' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.173 --rc genhtml_branch_coverage=1 00:28:03.173 --rc genhtml_function_coverage=1 00:28:03.173 --rc genhtml_legend=1 00:28:03.173 --rc geninfo_all_blocks=1 00:28:03.173 --rc geninfo_unexecuted_blocks=1 00:28:03.173 00:28:03.173 ' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.173 --rc genhtml_branch_coverage=1 00:28:03.173 --rc genhtml_function_coverage=1 00:28:03.173 --rc genhtml_legend=1 00:28:03.173 --rc geninfo_all_blocks=1 00:28:03.173 --rc geninfo_unexecuted_blocks=1 00:28:03.173 00:28:03.173 ' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:03.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.173 --rc genhtml_branch_coverage=1 00:28:03.173 --rc genhtml_function_coverage=1 00:28:03.173 --rc genhtml_legend=1 00:28:03.173 --rc geninfo_all_blocks=1 00:28:03.173 --rc geninfo_unexecuted_blocks=1 00:28:03.173 00:28:03.173 ' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:03.173 11:39:46 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81229 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81229 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81229 ']' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:03.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:03.173 11:39:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.432 [2024-11-15 11:39:46.223701] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
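Once waitforlisten returns, the test assembles the FTL bdev for the upgrade run over JSON-RPC. The chain below is condensed from the trace that follows; every rpc.py subcommand appears verbatim there, while <lvs-uuid> and <lvol-uuid> stand in for the run-specific UUIDs, and any leftover lvstore is first removed via bdev_lvol_get_lvstores / bdev_lvol_delete_lvstore:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # 5 GiB QEMU NVMe -> basen1
$rpc bdev_lvol_create_lvstore basen1 lvs                            # lvstore on top of basen1
$rpc bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # 20480 MiB thin lvol (larger than the store, hence -t)
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # cache NVMe -> cachen1
$rpc bdev_split_create cachen1 -s 5120 1                            # one 5120 MiB split: cachen1p0
$rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The bdev_get_bdevs output below confirms the base geometry: 1310720 blocks x 4096 B = 5120 MiB, which is why get_bdev_size reports 5120 and the 20480 MiB volume must be thin-provisioned.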
00:28:03.432 [2024-11-15 11:39:46.223876] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81229 ] 00:28:03.691 [2024-11-15 11:39:46.414145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.691 [2024-11-15 11:39:46.561819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:04.628 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:28:04.887 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:05.146 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:05.146 { 00:28:05.146 "name": "basen1", 00:28:05.146 "aliases": [ 00:28:05.146 "59215f29-0304-4b5b-a520-742648f8d0a8" 00:28:05.146 ], 00:28:05.146 "product_name": "NVMe disk", 00:28:05.146 "block_size": 4096, 00:28:05.146 "num_blocks": 1310720, 00:28:05.146 "uuid": "59215f29-0304-4b5b-a520-742648f8d0a8", 00:28:05.146 "numa_id": -1, 00:28:05.146 "assigned_rate_limits": { 00:28:05.146 "rw_ios_per_sec": 0, 00:28:05.146 "rw_mbytes_per_sec": 0, 00:28:05.146 "r_mbytes_per_sec": 0, 00:28:05.146 "w_mbytes_per_sec": 0 00:28:05.146 }, 00:28:05.146 "claimed": true, 00:28:05.146 "claim_type": "read_many_write_one", 00:28:05.146 "zoned": false, 00:28:05.146 "supported_io_types": { 00:28:05.146 "read": true, 00:28:05.146 "write": true, 00:28:05.146 "unmap": true, 00:28:05.146 "flush": true, 00:28:05.146 "reset": true, 00:28:05.146 "nvme_admin": true, 00:28:05.146 "nvme_io": true, 00:28:05.146 "nvme_io_md": false, 00:28:05.146 "write_zeroes": true, 00:28:05.146 "zcopy": false, 00:28:05.146 "get_zone_info": false, 00:28:05.146 "zone_management": false, 00:28:05.146 "zone_append": false, 00:28:05.146 "compare": true, 00:28:05.146 "compare_and_write": false, 00:28:05.146 "abort": true, 00:28:05.146 "seek_hole": false, 00:28:05.146 "seek_data": false, 00:28:05.146 "copy": true, 00:28:05.146 "nvme_iov_md": false 00:28:05.146 }, 00:28:05.146 "driver_specific": { 00:28:05.146 "nvme": [ 00:28:05.146 { 00:28:05.146 "pci_address": "0000:00:11.0", 00:28:05.146 "trid": { 00:28:05.146 "trtype": "PCIe", 00:28:05.146 "traddr": "0000:00:11.0" 00:28:05.146 }, 00:28:05.146 "ctrlr_data": { 00:28:05.146 "cntlid": 0, 00:28:05.146 "vendor_id": "0x1b36", 00:28:05.146 "model_number": "QEMU NVMe Ctrl", 00:28:05.146 "serial_number": "12341", 00:28:05.146 "firmware_revision": "8.0.0", 00:28:05.146 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:05.146 "oacs": { 00:28:05.146 "security": 0, 00:28:05.146 "format": 1, 00:28:05.146 "firmware": 0, 00:28:05.146 "ns_manage": 1 00:28:05.146 }, 00:28:05.146 "multi_ctrlr": false, 00:28:05.146 "ana_reporting": false 00:28:05.146 }, 00:28:05.146 "vs": { 00:28:05.146 "nvme_version": "1.4" 00:28:05.146 }, 00:28:05.146 "ns_data": { 00:28:05.146 "id": 1, 00:28:05.146 "can_share": false 00:28:05.146 } 00:28:05.146 } 00:28:05.146 ], 00:28:05.146 "mp_policy": "active_passive" 00:28:05.146 } 00:28:05.146 } 00:28:05.146 ]' 00:28:05.146 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:05.146 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:05.146 11:39:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:05.146 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:28:05.146 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:28:05.146 11:39:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:28:05.146 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:05.147 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:05.147 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:05.147 11:39:48 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:05.147 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:05.406 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21 00:28:05.406 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:05.406 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27c6cf49-6fb3-4bf7-a7f8-85d47bcb5d21 00:28:05.665 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:06.232 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=b6658e98-7c35-40b2-b296-b16c47193cb6 00:28:06.232 11:39:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b6658e98-7c35-40b2-b296-b16c47193cb6 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 ]] 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 5120 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:06.232 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 00:28:06.491 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:06.491 { 00:28:06.491 "name": "d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8", 00:28:06.491 "aliases": [ 00:28:06.491 "lvs/basen1p0" 00:28:06.491 ], 00:28:06.491 "product_name": "Logical Volume", 00:28:06.491 "block_size": 4096, 00:28:06.491 "num_blocks": 5242880, 00:28:06.491 "uuid": "d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8", 00:28:06.491 "assigned_rate_limits": { 00:28:06.491 "rw_ios_per_sec": 0, 00:28:06.491 "rw_mbytes_per_sec": 0, 00:28:06.491 "r_mbytes_per_sec": 0, 00:28:06.491 "w_mbytes_per_sec": 0 00:28:06.491 }, 00:28:06.491 "claimed": false, 00:28:06.491 "zoned": false, 00:28:06.491 "supported_io_types": { 00:28:06.491 "read": true, 00:28:06.491 "write": true, 00:28:06.491 "unmap": true, 00:28:06.491 "flush": false, 00:28:06.491 "reset": true, 00:28:06.491 "nvme_admin": false, 00:28:06.491 "nvme_io": false, 00:28:06.491 "nvme_io_md": false, 00:28:06.491 "write_zeroes": 
true, 00:28:06.491 "zcopy": false, 00:28:06.491 "get_zone_info": false, 00:28:06.491 "zone_management": false, 00:28:06.491 "zone_append": false, 00:28:06.491 "compare": false, 00:28:06.491 "compare_and_write": false, 00:28:06.491 "abort": false, 00:28:06.491 "seek_hole": true, 00:28:06.491 "seek_data": true, 00:28:06.491 "copy": false, 00:28:06.491 "nvme_iov_md": false 00:28:06.491 }, 00:28:06.491 "driver_specific": { 00:28:06.491 "lvol": { 00:28:06.491 "lvol_store_uuid": "b6658e98-7c35-40b2-b296-b16c47193cb6", 00:28:06.491 "base_bdev": "basen1", 00:28:06.491 "thin_provision": true, 00:28:06.491 "num_allocated_clusters": 0, 00:28:06.491 "snapshot": false, 00:28:06.491 "clone": false, 00:28:06.491 "esnap_clone": false 00:28:06.491 } 00:28:06.491 } 00:28:06.491 } 00:28:06.491 ]' 00:28:06.491 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:06.491 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:06.491 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:06.750 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:28:06.750 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:28:06.750 11:39:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:28:06.750 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:06.750 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:06.750 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:07.009 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:07.009 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:07.009 11:39:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:07.268 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:07.268 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:07.268 11:39:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d137f73e-5e55-4a7e-b5d3-cc8274f2f5d8 -c cachen1p0 --l2p_dram_limit 2 00:28:07.268 [2024-11-15 11:39:50.197222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.268 [2024-11-15 11:39:50.197275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:07.268 [2024-11-15 11:39:50.197298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.268 [2024-11-15 11:39:50.197311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.268 [2024-11-15 11:39:50.197399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.268 [2024-11-15 11:39:50.197430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:07.268 [2024-11-15 11:39:50.197461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:28:07.268 [2024-11-15 11:39:50.197472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.268 [2024-11-15 11:39:50.197520] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:07.269 [2024-11-15 
11:39:50.198395] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:07.269 [2024-11-15 11:39:50.198442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.269 [2024-11-15 11:39:50.198455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:07.269 [2024-11-15 11:39:50.198485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.926 ms 00:28:07.269 [2024-11-15 11:39:50.198496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.269 [2024-11-15 11:39:50.198626] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 01844c90-77c9-4ba7-a387-4a9294e659ec 00:28:07.269 [2024-11-15 11:39:50.200588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.269 [2024-11-15 11:39:50.200638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:07.269 [2024-11-15 11:39:50.200652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:07.269 [2024-11-15 11:39:50.200665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.269 [2024-11-15 11:39:50.211370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.269 [2024-11-15 11:39:50.211445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:07.269 [2024-11-15 11:39:50.211492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.643 ms 00:28:07.269 [2024-11-15 11:39:50.211522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.269 [2024-11-15 11:39:50.211583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.269 [2024-11-15 11:39:50.211603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:07.269 [2024-11-15 11:39:50.211615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:28:07.269 [2024-11-15 11:39:50.211631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.269 [2024-11-15 11:39:50.211705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.269 [2024-11-15 11:39:50.211734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:07.269 [2024-11-15 11:39:50.211755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:07.269 [2024-11-15 11:39:50.211790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.269 [2024-11-15 11:39:50.211822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:07.528 [2024-11-15 11:39:50.217401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.528 [2024-11-15 11:39:50.217476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:07.528 [2024-11-15 11:39:50.217508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.568 ms 00:28:07.528 [2024-11-15 11:39:50.217519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.528 [2024-11-15 11:39:50.217553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.528 [2024-11-15 11:39:50.217566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:07.528 [2024-11-15 11:39:50.217579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:07.528 [2024-11-15 11:39:50.217589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:07.528 [2024-11-15 11:39:50.217631] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:07.528 [2024-11-15 11:39:50.217801] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:07.528 [2024-11-15 11:39:50.217830] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:07.528 [2024-11-15 11:39:50.217845] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:07.528 [2024-11-15 11:39:50.217861] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:07.528 [2024-11-15 11:39:50.217873] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:07.528 [2024-11-15 11:39:50.217887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:07.528 [2024-11-15 11:39:50.217897] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:07.528 [2024-11-15 11:39:50.217913] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:07.528 [2024-11-15 11:39:50.217923] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:07.529 [2024-11-15 11:39:50.217936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.529 [2024-11-15 11:39:50.217946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:07.529 [2024-11-15 11:39:50.217960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:28:07.529 [2024-11-15 11:39:50.217970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.529 [2024-11-15 11:39:50.218095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.529 [2024-11-15 11:39:50.218114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:07.529 [2024-11-15 11:39:50.218137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.096 ms 00:28:07.529 [2024-11-15 11:39:50.218161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.529 [2024-11-15 11:39:50.218285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:07.529 [2024-11-15 11:39:50.218302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:07.529 [2024-11-15 11:39:50.218317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:07.529 [2024-11-15 11:39:50.218372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:07.529 [2024-11-15 11:39:50.218442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:07.529 [2024-11-15 11:39:50.218468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:07.529 [2024-11-15 11:39:50.218477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:07.529 [2024-11-15 11:39:50.218497] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:07.529 [2024-11-15 11:39:50.218508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:07.529 [2024-11-15 11:39:50.218528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:07.529 [2024-11-15 11:39:50.218537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:07.529 [2024-11-15 11:39:50.218559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:07.529 [2024-11-15 11:39:50.218573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:07.529 [2024-11-15 11:39:50.218595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:07.529 [2024-11-15 11:39:50.218605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:07.529 [2024-11-15 11:39:50.218625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:07.529 [2024-11-15 11:39:50.218636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:07.529 [2024-11-15 11:39:50.218657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:07.529 [2024-11-15 11:39:50.218666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:07.529 [2024-11-15 11:39:50.218686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:07.529 [2024-11-15 11:39:50.218697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:07.529 [2024-11-15 11:39:50.218728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:07.529 [2024-11-15 11:39:50.218738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:07.529 [2024-11-15 11:39:50.218758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:07.529 [2024-11-15 11:39:50.218789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:07.529 [2024-11-15 11:39:50.218819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:07.529 [2024-11-15 11:39:50.218830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218839] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:07.529 [2024-11-15 11:39:50.218851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:07.529 [2024-11-15 11:39:50.218861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.529 [2024-11-15 11:39:50.218885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:07.529 [2024-11-15 11:39:50.218899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:07.529 [2024-11-15 11:39:50.218908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:07.529 [2024-11-15 11:39:50.218921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:07.529 [2024-11-15 11:39:50.218930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:07.529 [2024-11-15 11:39:50.218942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:07.529 [2024-11-15 11:39:50.218956] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:07.529 [2024-11-15 11:39:50.218971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.218985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:07.529 [2024-11-15 11:39:50.218997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:07.529 [2024-11-15 11:39:50.219029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:07.529 [2024-11-15 11:39:50.219057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:07.529 [2024-11-15 11:39:50.219085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:07.529 [2024-11-15 11:39:50.219099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:07.529 [2024-11-15 11:39:50.219229] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:07.529 [2024-11-15 11:39:50.219245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.529 [2024-11-15 11:39:50.219272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:07.529 [2024-11-15 11:39:50.219284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:07.529 [2024-11-15 11:39:50.219298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:07.529 [2024-11-15 11:39:50.219311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.529 [2024-11-15 11:39:50.219325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:07.529 [2024-11-15 11:39:50.219338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.098 ms 00:28:07.529 [2024-11-15 11:39:50.219351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.529 [2024-11-15 11:39:50.219458] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
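For reference, the bdev setup traced above condenses to the following rpc.py sequence; a minimal sketch using the sizes and PCI address from this run. It assumes the base controller (basen1 on 0000:00:11.0) is already attached, as earlier in this log, and captures the per-run UUIDs from the RPC output instead of hard-coding them:

    #!/usr/bin/env bash
    # Sketch of the setup RPCs issued by ftl/common.sh in the trace above.
    # Sizes (20480 MiB base lvol, 5120 MiB cache split) and the cache PCI
    # address (0000:00:10.0) are the values from this run; lvstore/lvol
    # UUIDs are regenerated on every run.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Clear any lvstore left on the base namespace by a previous run.
    for lvs in $($RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $RPC bdev_lvol_delete_lvstore -u "$lvs"
    done

    # Fresh lvstore on basen1 and a thin-provisioned 20 GiB base lvol;
    # both commands print the UUID of the object they create.
    lvs=$($RPC bdev_lvol_create_lvstore basen1 lvs)
    base=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs")

    # Attach the cache controller and split off a 5 GiB write-buffer chunk
    # (cachen1p0).
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create cachen1 -s 5120 1

    # Create the FTL bdev on top of the pair (60 s timeout, as in the log).
    $RPC -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2
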
00:28:07.529 [2024-11-15 11:39:50.219478] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:10.816 [2024-11-15 11:39:53.030955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.031024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:10.816 [2024-11-15 11:39:53.031054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2811.508 ms 00:28:10.816 [2024-11-15 11:39:53.031069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.816 [2024-11-15 11:39:53.063370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.063418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:10.816 [2024-11-15 11:39:53.063437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.069 ms 00:28:10.816 [2024-11-15 11:39:53.063450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.816 [2024-11-15 11:39:53.063574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.063596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:10.816 [2024-11-15 11:39:53.063608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:10.816 [2024-11-15 11:39:53.063626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.816 [2024-11-15 11:39:53.100891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.100933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:10.816 [2024-11-15 11:39:53.100949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.215 ms 00:28:10.816 [2024-11-15 11:39:53.100961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.816 [2024-11-15 11:39:53.101002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.101024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:10.816 [2024-11-15 11:39:53.101098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:10.816 [2024-11-15 11:39:53.101116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.816 [2024-11-15 11:39:53.101710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.101731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:10.816 [2024-11-15 11:39:53.101743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:28:10.816 [2024-11-15 11:39:53.101755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.816 [2024-11-15 11:39:53.101814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.816 [2024-11-15 11:39:53.101830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:10.817 [2024-11-15 11:39:53.101844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:10.817 [2024-11-15 11:39:53.101858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.119757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.119797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:10.817 [2024-11-15 11:39:53.119812] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.876 ms 00:28:10.817 [2024-11-15 11:39:53.119824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.142892] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:10.817 [2024-11-15 11:39:53.144262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.144304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:10.817 [2024-11-15 11:39:53.144322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.349 ms 00:28:10.817 [2024-11-15 11:39:53.144333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.169569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.169605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:10.817 [2024-11-15 11:39:53.169624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.197 ms 00:28:10.817 [2024-11-15 11:39:53.169635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.169737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.169759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:10.817 [2024-11-15 11:39:53.169776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:28:10.817 [2024-11-15 11:39:53.169786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.194007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.194051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:10.817 [2024-11-15 11:39:53.194070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.146 ms 00:28:10.817 [2024-11-15 11:39:53.194081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.218156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.218189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:10.817 [2024-11-15 11:39:53.218206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.023 ms 00:28:10.817 [2024-11-15 11:39:53.218217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.218904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.218934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:10.817 [2024-11-15 11:39:53.218951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:28:10.817 [2024-11-15 11:39:53.218965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.295193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.295250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:10.817 [2024-11-15 11:39:53.295274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.153 ms 00:28:10.817 [2024-11-15 11:39:53.295287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.323316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:10.817 [2024-11-15 11:39:53.323367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:10.817 [2024-11-15 11:39:53.323399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.931 ms 00:28:10.817 [2024-11-15 11:39:53.323412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.348244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.348278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:10.817 [2024-11-15 11:39:53.348294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.786 ms 00:28:10.817 [2024-11-15 11:39:53.348304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.372996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.373038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:10.817 [2024-11-15 11:39:53.373057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.646 ms 00:28:10.817 [2024-11-15 11:39:53.373068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.373154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.373171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:10.817 [2024-11-15 11:39:53.373189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:10.817 [2024-11-15 11:39:53.373200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.373293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:10.817 [2024-11-15 11:39:53.373310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:10.817 [2024-11-15 11:39:53.373343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:10.817 [2024-11-15 11:39:53.373369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:10.817 [2024-11-15 11:39:53.374840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3177.054 ms, result 0 00:28:10.817 { 00:28:10.817 "name": "ftl", 00:28:10.817 "uuid": "01844c90-77c9-4ba7-a387-4a9294e659ec" 00:28:10.817 } 00:28:10.817 11:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:10.817 [2024-11-15 11:39:53.689722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:10.817 11:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:11.076 11:39:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:11.334 [2024-11-15 11:39:54.234231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:11.334 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:11.593 [2024-11-15 11:39:54.495571] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:11.593 11:39:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:12.160 Fill FTL, iteration 1 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81352 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:12.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81352 /var/tmp/spdk.tgt.sock 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81352 ']' 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:12.160 11:39:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:12.161 11:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:12.161 11:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:12.161 11:39:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.161 [2024-11-15 11:39:54.985951] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:28:12.161 [2024-11-15 11:39:54.986344] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81352 ] 00:28:12.419 [2024-11-15 11:39:55.164665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.419 [2024-11-15 11:39:55.267345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.353 11:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:13.353 11:39:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:13.353 11:39:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:13.353 ftln1 00:28:13.353 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:13.353 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81352 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81352 ']' 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81352 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81352 00:28:13.612 killing process with pid 81352 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81352' 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81352 00:28:13.612 11:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81352 00:28:15.516 11:39:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:15.516 11:39:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:15.516 [2024-11-15 11:39:58.396536] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:28:15.516 [2024-11-15 11:39:58.397104] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81399 ] 00:28:15.774 [2024-11-15 11:39:58.575689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.774 [2024-11-15 11:39:58.675905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.148  [2024-11-15T11:40:01.473Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-15T11:40:02.408Z] Copying: 428/1024 [MB] (215 MBps) [2024-11-15T11:40:03.344Z] Copying: 644/1024 [MB] (216 MBps) [2024-11-15T11:40:03.912Z] Copying: 858/1024 [MB] (214 MBps) [2024-11-15T11:40:04.848Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:28:21.899 00:28:21.899 Calculate MD5 checksum, iteration 1 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:21.899 11:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:22.178 [2024-11-15 11:40:04.890567] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:28:22.178 [2024-11-15 11:40:04.891009] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81469 ] 00:28:22.178 [2024-11-15 11:40:05.071458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.465 [2024-11-15 11:40:05.172147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.852  [2024-11-15T11:40:07.739Z] Copying: 451/1024 [MB] (451 MBps) [2024-11-15T11:40:07.997Z] Copying: 905/1024 [MB] (454 MBps) [2024-11-15T11:40:08.933Z] Copying: 1024/1024 [MB] (average 453 MBps) 00:28:25.984 00:28:25.984 11:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:25.984 11:40:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:27.891 Fill FTL, iteration 2 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b759a46a741a4f80fae9e53640c530b6 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:27.891 11:40:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:27.891 [2024-11-15 11:40:10.443797] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:28:27.891 [2024-11-15 11:40:10.443942] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81525 ] 00:28:27.891 [2024-11-15 11:40:10.620716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.891 [2024-11-15 11:40:10.770184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.271  [2024-11-15T11:40:13.598Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-15T11:40:14.534Z] Copying: 427/1024 [MB] (211 MBps) [2024-11-15T11:40:15.469Z] Copying: 642/1024 [MB] (215 MBps) [2024-11-15T11:40:16.037Z] Copying: 852/1024 [MB] (210 MBps) [2024-11-15T11:40:16.977Z] Copying: 1024/1024 [MB] (average 211 MBps) 00:28:34.028 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:34.028 Calculate MD5 checksum, iteration 2 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:34.028 11:40:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:34.288 [2024-11-15 11:40:17.006320] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:28:34.288 [2024-11-15 11:40:17.006459] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81595 ] 00:28:34.288 [2024-11-15 11:40:17.172880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.547 [2024-11-15 11:40:17.284015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.927  [2024-11-15T11:40:20.256Z] Copying: 453/1024 [MB] (453 MBps) [2024-11-15T11:40:20.256Z] Copying: 907/1024 [MB] (454 MBps) [2024-11-15T11:40:21.194Z] Copying: 1024/1024 [MB] (average 452 MBps) 00:28:38.245 00:28:38.245 11:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:38.245 11:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:40.151 11:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:40.151 11:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0117f52417497a97051b120d08eec57a 00:28:40.151 11:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:40.151 11:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:40.151 11:40:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:40.410 [2024-11-15 11:40:23.212346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.410 [2024-11-15 11:40:23.212397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:40.410 [2024-11-15 11:40:23.212415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:40.410 [2024-11-15 11:40:23.212427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.410 [2024-11-15 11:40:23.212458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.410 [2024-11-15 11:40:23.212477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:40.410 [2024-11-15 11:40:23.212494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:40.410 [2024-11-15 11:40:23.212505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.410 [2024-11-15 11:40:23.212536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.410 [2024-11-15 11:40:23.212550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:40.410 [2024-11-15 11:40:23.212561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:40.410 [2024-11-15 11:40:23.212571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.410 [2024-11-15 11:40:23.212639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.293 ms, result 0 00:28:40.410 true 00:28:40.410 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:40.670 { 00:28:40.670 "name": "ftl", 00:28:40.670 "properties": [ 00:28:40.670 { 00:28:40.670 "name": "superblock_version", 00:28:40.670 "value": 5, 00:28:40.670 "read-only": true 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "name": "base_device", 00:28:40.670 "bands": [ 00:28:40.670 { 00:28:40.670 "id": 
0, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 1, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 2, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 3, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 4, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 5, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 6, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 7, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 8, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 9, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 10, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 11, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 12, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 13, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 14, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 15, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 16, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 17, 00:28:40.670 "state": "FREE", 00:28:40.670 "validity": 0.0 00:28:40.670 } 00:28:40.670 ], 00:28:40.670 "read-only": true 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "name": "cache_device", 00:28:40.670 "type": "bdev", 00:28:40.670 "chunks": [ 00:28:40.670 { 00:28:40.670 "id": 0, 00:28:40.670 "state": "INACTIVE", 00:28:40.670 "utilization": 0.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 1, 00:28:40.670 "state": "CLOSED", 00:28:40.670 "utilization": 1.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 2, 00:28:40.670 "state": "CLOSED", 00:28:40.670 "utilization": 1.0 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 3, 00:28:40.670 "state": "OPEN", 00:28:40.670 "utilization": 0.001953125 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "id": 4, 00:28:40.670 "state": "OPEN", 00:28:40.670 "utilization": 0.0 00:28:40.670 } 00:28:40.670 ], 00:28:40.670 "read-only": true 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "name": "verbose_mode", 00:28:40.670 "value": true, 00:28:40.670 "unit": "", 00:28:40.670 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:40.670 }, 00:28:40.670 { 00:28:40.670 "name": "prep_upgrade_on_shutdown", 00:28:40.670 "value": false, 00:28:40.670 "unit": "", 00:28:40.670 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:40.670 } 00:28:40.670 ] 00:28:40.670 } 00:28:40.670 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:40.930 [2024-11-15 11:40:23.704769] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.930 [2024-11-15 11:40:23.705017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:40.930 [2024-11-15 11:40:23.705212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:40.930 [2024-11-15 11:40:23.705274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.930 [2024-11-15 11:40:23.705343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.930 [2024-11-15 11:40:23.705517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:40.930 [2024-11-15 11:40:23.705609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:40.930 [2024-11-15 11:40:23.705649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.930 [2024-11-15 11:40:23.705703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:40.930 [2024-11-15 11:40:23.705745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:40.930 [2024-11-15 11:40:23.705779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:40.930 [2024-11-15 11:40:23.705887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:40.930 [2024-11-15 11:40:23.706004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.219 ms, result 0 00:28:40.930 true 00:28:40.930 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:40.930 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:40.930 11:40:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:41.189 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:41.189 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:41.189 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:41.449 [2024-11-15 11:40:24.217212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.449 [2024-11-15 11:40:24.217260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:41.449 [2024-11-15 11:40:24.217304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:41.449 [2024-11-15 11:40:24.217314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.449 [2024-11-15 11:40:24.217344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.449 [2024-11-15 11:40:24.217359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:41.449 [2024-11-15 11:40:24.217385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:41.449 [2024-11-15 11:40:24.217394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.449 [2024-11-15 11:40:24.217417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.449 [2024-11-15 11:40:24.217429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:41.449 [2024-11-15 11:40:24.217439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:41.449 [2024-11-15 
11:40:24.217449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:41.449 [2024-11-15 11:40:24.217510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.285 ms, result 0
00:28:41.449 true
00:28:41.449 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:41.708 {
  "name": "ftl",
  "properties": [
    { "name": "superblock_version", "value": 5, "read-only": true },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "FREE", "validity": 0.0 },
        { "id": 1, "state": "FREE", "validity": 0.0 },
        { "id": 2, "state": "FREE", "validity": 0.0 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "CLOSED", "utilization": 1.0 },
        { "id": 2, "state": "CLOSED", "utilization": 1.0 },
        { "id": 3, "state": "OPEN", "utilization": 0.001953125 },
        { "id": 4, "state": "OPEN", "utilization": 0.0 }
      ],
      "read-only": true
    },
    { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
    { "name": "prep_upgrade_on_shutdown", "value": true, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
  ]
}
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81229 ]]
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81229
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81229 ']'
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81229
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81229
00:28:41.709 killing process with pid 81229
11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81229'
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81229
00:28:41.709 11:40:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81229
00:28:42.648 [2024-11-15 11:40:25.313864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:28:42.648 [2024-11-15 11:40:25.329612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:42.648 [2024-11-15 11:40:25.329694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
00:28:42.648 [2024-11-15 11:40:25.329714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:28:42.648 [2024-11-15 11:40:25.329740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:42.648 [2024-11-15 11:40:25.329769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:28:42.648 [2024-11-15 11:40:25.332920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:42.648 [2024-11-15 11:40:25.333157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
00:28:42.648 [2024-11-15 11:40:25.333186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.131 ms
00:28:42.648 [2024-11-15 11:40:25.333199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:50.796 [2024-11-15 11:40:33.396885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:50.796 [2024-11-15 11:40:33.396962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
00:28:50.796 [2024-11-15 11:40:33.396983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8063.678 ms
00:28:50.796 [2024-11-15 11:40:33.396999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:50.796 [2024-11-15
11:40:33.398234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.398267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:50.796 [2024-11-15 11:40:33.398282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.214 ms 00:28:50.796 [2024-11-15 11:40:33.398295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.399435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.399466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:50.796 [2024-11-15 11:40:33.399495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:28:50.796 [2024-11-15 11:40:33.399506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.410408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.410600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:50.796 [2024-11-15 11:40:33.410627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.702 ms 00:28:50.796 [2024-11-15 11:40:33.410640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.417448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.417661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:50.796 [2024-11-15 11:40:33.417688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.764 ms 00:28:50.796 [2024-11-15 11:40:33.417701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.417814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.417843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:50.796 [2024-11-15 11:40:33.417856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:50.796 [2024-11-15 11:40:33.417875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.427995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.428043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:50.796 [2024-11-15 11:40:33.428075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.098 ms 00:28:50.796 [2024-11-15 11:40:33.428085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.438138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.438174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:50.796 [2024-11-15 11:40:33.438188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.017 ms 00:28:50.796 [2024-11-15 11:40:33.438197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.447956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.448161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:50.796 [2024-11-15 11:40:33.448186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.723 ms 00:28:50.796 [2024-11-15 11:40:33.448199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:28:50.796 [2024-11-15 11:40:33.458096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.796 [2024-11-15 11:40:33.458132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:50.796 [2024-11-15 11:40:33.458146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.819 ms 00:28:50.796 [2024-11-15 11:40:33.458155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.796 [2024-11-15 11:40:33.458189] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:50.796 [2024-11-15 11:40:33.458209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:50.796 [2024-11-15 11:40:33.458222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:50.797 [2024-11-15 11:40:33.458246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:50.797 [2024-11-15 11:40:33.458257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:50.797 [2024-11-15 11:40:33.458406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:50.797 [2024-11-15 11:40:33.458416] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 01844c90-77c9-4ba7-a387-4a9294e659ec 00:28:50.797 [2024-11-15 11:40:33.458426] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:50.797 [2024-11-15 11:40:33.458435] 
ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:50.797 [2024-11-15 11:40:33.458444] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:50.797 [2024-11-15 11:40:33.458453] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:50.797 [2024-11-15 11:40:33.458462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:50.797 [2024-11-15 11:40:33.458472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:50.797 [2024-11-15 11:40:33.458486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:50.797 [2024-11-15 11:40:33.458494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:50.797 [2024-11-15 11:40:33.458510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:50.797 [2024-11-15 11:40:33.458521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.797 [2024-11-15 11:40:33.458531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:50.797 [2024-11-15 11:40:33.458546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.333 ms 00:28:50.797 [2024-11-15 11:40:33.458556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.472464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.797 [2024-11-15 11:40:33.472501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:50.797 [2024-11-15 11:40:33.472516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.888 ms 00:28:50.797 [2024-11-15 11:40:33.472526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.472917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:50.797 [2024-11-15 11:40:33.472931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:50.797 [2024-11-15 11:40:33.472942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:28:50.797 [2024-11-15 11:40:33.472952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.518075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.518280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:50.797 [2024-11-15 11:40:33.518307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.518326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.518364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.518379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:50.797 [2024-11-15 11:40:33.518390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.518399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.518505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.518525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:50.797 [2024-11-15 11:40:33.518537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.518547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 
[2024-11-15 11:40:33.518593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.518607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:50.797 [2024-11-15 11:40:33.518618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.518628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.602244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.602303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:50.797 [2024-11-15 11:40:33.602320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.602337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.670805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.670856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:50.797 [2024-11-15 11:40:33.670872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.670882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.670971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.670989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:50.797 [2024-11-15 11:40:33.671016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.671026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.671196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.671221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:50.797 [2024-11-15 11:40:33.671233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.671243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.671357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.671375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:50.797 [2024-11-15 11:40:33.671387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.671398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.671447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.671479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:50.797 [2024-11-15 11:40:33.671520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.671530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.671584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.671599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:50.797 [2024-11-15 11:40:33.671610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.671621] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.671671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:50.797 [2024-11-15 11:40:33.671693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:50.797 [2024-11-15 11:40:33.671704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:50.797 [2024-11-15 11:40:33.671714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:50.797 [2024-11-15 11:40:33.671855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8342.256 ms, result 0 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81788 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81788 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81788 ']' 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.084 11:40:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:54.084 [2024-11-15 11:40:36.747935] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
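For readers decoding the xtrace above: the `waitforlisten 81788` helper simply polls the new target's RPC socket until it answers, giving up if the process dies first. A minimal sketch of that pattern, under stated assumptions (the function name `waitforlisten_sketch` is hypothetical, the real helper lives in test/common/autotest_common.sh, `rpc_get_methods` is a standard SPDK RPC, and the retry bound mirrors the `max_retries=100` visible in the trace):

    # Sketch only -- not the actual autotest_common.sh implementation.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Bail out if the target process died during startup.
            kill -0 "$pid" 2> /dev/null || return 1
            # The socket is ready once any RPC round-trips successfully.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }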
00:28:54.084 [2024-11-15 11:40:36.748172] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81788 ] 00:28:54.084 [2024-11-15 11:40:36.914927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.084 [2024-11-15 11:40:37.031764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.021 [2024-11-15 11:40:37.860717] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:55.021 [2024-11-15 11:40:37.860801] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:55.281 [2024-11-15 11:40:38.006810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.281 [2024-11-15 11:40:38.006855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:55.281 [2024-11-15 11:40:38.006874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:55.281 [2024-11-15 11:40:38.006884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.281 [2024-11-15 11:40:38.006951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.281 [2024-11-15 11:40:38.006970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:55.281 [2024-11-15 11:40:38.006981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:55.281 [2024-11-15 11:40:38.006990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.281 [2024-11-15 11:40:38.007020] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:55.281 [2024-11-15 11:40:38.007926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:55.281 [2024-11-15 11:40:38.007966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.281 [2024-11-15 11:40:38.007979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:55.282 [2024-11-15 11:40:38.007990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.953 ms 00:28:55.282 [2024-11-15 11:40:38.008001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.010040] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:55.282 [2024-11-15 11:40:38.023563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.023605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:55.282 [2024-11-15 11:40:38.023627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.536 ms 00:28:55.282 [2024-11-15 11:40:38.023638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.023702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.023720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:55.282 [2024-11-15 11:40:38.023731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:55.282 [2024-11-15 11:40:38.023740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.032019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 
11:40:38.032065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:55.282 [2024-11-15 11:40:38.032080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.211 ms 00:28:55.282 [2024-11-15 11:40:38.032090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.032160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.032178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:55.282 [2024-11-15 11:40:38.032189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:55.282 [2024-11-15 11:40:38.032199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.032271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.032289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:55.282 [2024-11-15 11:40:38.032305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:55.282 [2024-11-15 11:40:38.032315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.032348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:55.282 [2024-11-15 11:40:38.036657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.036838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:55.282 [2024-11-15 11:40:38.036865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.317 ms 00:28:55.282 [2024-11-15 11:40:38.036884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.036923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.036938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:55.282 [2024-11-15 11:40:38.036949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:55.282 [2024-11-15 11:40:38.036960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.037010] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:55.282 [2024-11-15 11:40:38.037060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:55.282 [2024-11-15 11:40:38.037156] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:55.282 [2024-11-15 11:40:38.037177] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:55.282 [2024-11-15 11:40:38.037278] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:55.282 [2024-11-15 11:40:38.037294] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:55.282 [2024-11-15 11:40:38.037308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:55.282 [2024-11-15 11:40:38.037321] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037334] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037351] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:55.282 [2024-11-15 11:40:38.037361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:55.282 [2024-11-15 11:40:38.037371] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:55.282 [2024-11-15 11:40:38.037381] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:55.282 [2024-11-15 11:40:38.037393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.037404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:55.282 [2024-11-15 11:40:38.037415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:28:55.282 [2024-11-15 11:40:38.037439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.037534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.282 [2024-11-15 11:40:38.037546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:55.282 [2024-11-15 11:40:38.037557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:28:55.282 [2024-11-15 11:40:38.037571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.282 [2024-11-15 11:40:38.037671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:55.282 [2024-11-15 11:40:38.037687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:55.282 [2024-11-15 11:40:38.037697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.282 [2024-11-15 11:40:38.037718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:55.282 [2024-11-15 11:40:38.037726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:55.282 [2024-11-15 11:40:38.037735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:55.282 [2024-11-15 11:40:38.037744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:55.282 [2024-11-15 11:40:38.037754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:55.282 [2024-11-15 11:40:38.037763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.282 [2024-11-15 11:40:38.037772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:55.282 [2024-11-15 11:40:38.037781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:55.282 [2024-11-15 11:40:38.037790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.282 [2024-11-15 11:40:38.037799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:55.282 [2024-11-15 11:40:38.037808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:55.282 [2024-11-15 11:40:38.037819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.282 [2024-11-15 11:40:38.037828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:55.282 [2024-11-15 11:40:38.037837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:55.282 [2024-11-15 11:40:38.037845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.282 [2024-11-15 11:40:38.037855] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:55.282 [2024-11-15 11:40:38.037864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:55.282 [2024-11-15 11:40:38.037874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:55.282 [2024-11-15 11:40:38.037891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:55.282 [2024-11-15 11:40:38.037900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:55.282 [2024-11-15 11:40:38.037931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:55.282 [2024-11-15 11:40:38.037939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:55.282 [2024-11-15 11:40:38.037957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:55.282 [2024-11-15 11:40:38.037966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.282 [2024-11-15 11:40:38.037975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:55.283 [2024-11-15 11:40:38.037984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:55.283 [2024-11-15 11:40:38.037992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.283 [2024-11-15 11:40:38.038001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:55.283 [2024-11-15 11:40:38.038009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:55.283 [2024-11-15 11:40:38.038018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.283 [2024-11-15 11:40:38.038027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:55.283 [2024-11-15 11:40:38.038035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:55.283 [2024-11-15 11:40:38.038043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.283 [2024-11-15 11:40:38.038052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:55.283 [2024-11-15 11:40:38.038076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:55.283 [2024-11-15 11:40:38.038087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.283 [2024-11-15 11:40:38.038096] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:55.283 [2024-11-15 11:40:38.038107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:55.283 [2024-11-15 11:40:38.038117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:55.283 [2024-11-15 11:40:38.038127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.283 [2024-11-15 11:40:38.038142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:55.283 [2024-11-15 11:40:38.038152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:55.283 [2024-11-15 11:40:38.038161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:55.283 [2024-11-15 11:40:38.038170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:55.283 [2024-11-15 11:40:38.038179] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:55.283 [2024-11-15 11:40:38.038188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:55.283 [2024-11-15 11:40:38.038198] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:55.283 [2024-11-15 11:40:38.038210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:55.283 [2024-11-15 11:40:38.038231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:55.283 [2024-11-15 11:40:38.038259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:55.283 [2024-11-15 11:40:38.038268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:55.283 [2024-11-15 11:40:38.038276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:55.283 [2024-11-15 11:40:38.038286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:55.283 [2024-11-15 11:40:38.038349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:55.283 [2024-11-15 11:40:38.038360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:55.283 [2024-11-15 11:40:38.038381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:55.283 [2024-11-15 11:40:38.038391] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:55.283 [2024-11-15 11:40:38.038400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:55.283 [2024-11-15 11:40:38.038411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.283 [2024-11-15 11:40:38.038421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:55.283 [2024-11-15 11:40:38.038430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.797 ms 00:28:55.283 [2024-11-15 11:40:38.038440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.283 [2024-11-15 11:40:38.038495] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:55.283 [2024-11-15 11:40:38.038511] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:58.573 [2024-11-15 11:40:40.869881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.870262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:58.573 [2024-11-15 11:40:40.870430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2831.396 ms 00:28:58.573 [2024-11-15 11:40:40.870569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.907178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.907464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:58.573 [2024-11-15 11:40:40.907611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.221 ms 00:28:58.573 [2024-11-15 11:40:40.907661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.907822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.907894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:58.573 [2024-11-15 11:40:40.908013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:58.573 [2024-11-15 11:40:40.908097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.948783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.948993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:58.573 [2024-11-15 11:40:40.949225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.413 ms 00:28:58.573 [2024-11-15 11:40:40.949286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.949455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.949549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:58.573 [2024-11-15 11:40:40.949762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:58.573 [2024-11-15 11:40:40.949814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.950497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.950690] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:58.573 [2024-11-15 11:40:40.950826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:28:58.573 [2024-11-15 11:40:40.950874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.951067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.951133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:58.573 [2024-11-15 11:40:40.951317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:58.573 [2024-11-15 11:40:40.951366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.970511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.970723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:58.573 [2024-11-15 11:40:40.970835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.081 ms 00:28:58.573 [2024-11-15 11:40:40.970885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:40.995989] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:58.573 [2024-11-15 11:40:40.996254] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:58.573 [2024-11-15 11:40:40.996280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:40.996294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:58.573 [2024-11-15 11:40:40.996307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.168 ms 00:28:58.573 [2024-11-15 11:40:40.996318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.011720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.011764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:58.573 [2024-11-15 11:40:41.011798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.350 ms 00:28:58.573 [2024-11-15 11:40:41.011809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.024818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.024860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:58.573 [2024-11-15 11:40:41.024892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.971 ms 00:28:58.573 [2024-11-15 11:40:41.024902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.038399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.038439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:58.573 [2024-11-15 11:40:41.038470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.452 ms 00:28:58.573 [2024-11-15 11:40:41.038481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.039293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.039334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:58.573 [2024-11-15 
11:40:41.039350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:28:58.573 [2024-11-15 11:40:41.039362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.107635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.107706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:58.573 [2024-11-15 11:40:41.107742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.242 ms 00:28:58.573 [2024-11-15 11:40:41.107753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.118750] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:58.573 [2024-11-15 11:40:41.119717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.119752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:58.573 [2024-11-15 11:40:41.119784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.896 ms 00:28:58.573 [2024-11-15 11:40:41.119795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.119906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.119926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:58.573 [2024-11-15 11:40:41.119939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:58.573 [2024-11-15 11:40:41.119950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.120097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.120118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:58.573 [2024-11-15 11:40:41.120131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:28:58.573 [2024-11-15 11:40:41.120143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.120194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.573 [2024-11-15 11:40:41.120215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:58.573 [2024-11-15 11:40:41.120227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:58.573 [2024-11-15 11:40:41.120238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.573 [2024-11-15 11:40:41.120283] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:58.573 [2024-11-15 11:40:41.120301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.574 [2024-11-15 11:40:41.120312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:58.574 [2024-11-15 11:40:41.120323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:58.574 [2024-11-15 11:40:41.120334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:58.574 [2024-11-15 11:40:41.146940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:58.574 [2024-11-15 11:40:41.146987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:58.574 [2024-11-15 11:40:41.147019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.579 ms 00:28:58.574 [2024-11-15 11:40:41.147030] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:58.574 [2024-11-15 11:40:41.147166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:58.574 [2024-11-15 11:40:41.147201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:28:58.574 [2024-11-15 11:40:41.147214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms
00:28:58.574 [2024-11-15 11:40:41.147225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:58.574 [2024-11-15 11:40:41.148845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3141.426 ms, result 0
00:28:58.574 [2024-11-15 11:40:41.163413] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:58.574 [2024-11-15 11:40:41.179424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:28:58.574 [2024-11-15 11:40:41.187662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:28:58.574 11:40:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:58.574 11:40:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0
00:28:58.574 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:28:58.574 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:28:58.574 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:28:58.574 [2024-11-15 11:40:41.491711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:58.574 [2024-11-15 11:40:41.491758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:28:58.574 [2024-11-15 11:40:41.491798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
00:28:58.574 [2024-11-15 11:40:41.491809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:58.574 [2024-11-15 11:40:41.491855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:58.574 [2024-11-15 11:40:41.491871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:28:58.574 [2024-11-15 11:40:41.491882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
00:28:58.574 [2024-11-15 11:40:41.491896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:58.574 [2024-11-15 11:40:41.491919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:28:58.574 [2024-11-15 11:40:41.491931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:28:58.574 [2024-11-15 11:40:41.491942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:28:58.574 [2024-11-15 11:40:41.491952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:28:58.574 [2024-11-15 11:40:41.492015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.294 ms, result 0
00:28:58.574 true
00:28:58.574 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:58.833 {
  "name": "ftl",
  "properties": [
    { "name": "superblock_version", "value": 5, "read-only": true },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "CLOSED", "validity": 1.0 },
        { "id": 1, "state": "CLOSED", "validity": 1.0 },
        { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "OPEN", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 },
        { "id": 4, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
    { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
  ]
}
00:28:58.834 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:28:58.834 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:58.834 11:40:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:28:59.403 Validate MD5 checksum, iteration 1
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:28:59.403 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:28:59.404 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:28:59.404 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:28:59.404 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:28:59.404 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:28:59.404 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:28:59.404 11:40:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:28:59.707 [2024-11-15 11:40:42.425712] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization...
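The two jq filters traced at upgrade_shutdown.sh@82 and @89 are the test's post-restart invariants: after a clean shutdown with prep_upgrade_on_shutdown enabled, no cache chunk may still hold data and no band may be left in the OPENED state. Reconstructed as a standalone snippet (a sketch only: the two filters are quoted verbatim from the trace, while the variable handling and exit-on-failure are simplifications):

    # Sketch of the used-chunks / opened-bands checks from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)

    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

    # Both counts were 0 in this run, so the test moved on to checksum validation.
    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1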
00:28:59.707 [2024-11-15 11:40:42.426125] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81867 ] 00:28:59.707 [2024-11-15 11:40:42.608409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.993 [2024-11-15 11:40:42.711352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.373  [2024-11-15T11:40:45.698Z] Copying: 493/1024 [MB] (493 MBps) [2024-11-15T11:40:45.698Z] Copying: 975/1024 [MB] (482 MBps) [2024-11-15T11:40:47.075Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:29:04.126 00:29:04.126 11:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:04.126 11:40:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:05.508 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b759a46a741a4f80fae9e53640c530b6 00:29:05.768 Validate MD5 checksum, iteration 2 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b759a46a741a4f80fae9e53640c530b6 != \b\7\5\9\a\4\6\a\7\4\1\a\4\f\8\0\f\a\e\9\e\5\3\6\4\0\c\5\3\0\b\6 ]] 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:05.768 11:40:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:05.768 [2024-11-15 11:40:48.562711] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
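With the device idle, test_validate_checksum reads the volume in 1 GiB windows and fingerprints each one. The loop below is a sketch assembled from the @96-@105 xtrace lines (the dd flags are verbatim; iterations, testdir, and the expected_md5 array are my stand-in names, and the expected values come from the data-write phase, which lies outside this excerpt). tcp_dd is the script's helper that runs spdk_dd against the TCP initiator config (ini.json), as its expansion at common.sh@199 shows:

  skip=0
  for ((i = 0; i < iterations; i++)); do
      echo "Validate MD5 checksum, iteration $((i + 1))"
      # Read the next 1024 x 1 MiB blocks from the NVMe/TCP-attached FTL bdev.
      tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$((skip + 1024))
      sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
      # Compare against the checksum recorded when the data was first written.
      [[ $sum != "${expected_md5[i]}" ]] && exit 1
  done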
00:29:05.768 [2024-11-15 11:40:48.562891] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81930 ] 00:29:06.028 [2024-11-15 11:40:48.748548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.028 [2024-11-15 11:40:48.883879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.937  [2024-11-15T11:40:51.455Z] Copying: 487/1024 [MB] (487 MBps) [2024-11-15T11:40:51.714Z] Copying: 958/1024 [MB] (471 MBps) [2024-11-15T11:40:53.618Z] Copying: 1024/1024 [MB] (average 479 MBps) 00:29:10.669 00:29:10.669 11:40:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:10.669 11:40:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0117f52417497a97051b120d08eec57a 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0117f52417497a97051b120d08eec57a != \0\1\1\7\f\5\2\4\1\7\4\9\7\a\9\7\0\5\1\b\1\2\0\d\0\8\e\e\c\5\7\a ]] 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81788 ]] 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81788 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82003 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82003 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 82003 ']' 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:12.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
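Both pre-shutdown checksums match, so the test now simulates a crash: tcp_target_shutdown_dirty (@114) SIGKILLs the target so no FTL shutdown path runs, and tcp_target_setup (@115) relaunches it from the saved JSON config. A sketch of that sequence under the paths visible in the trace (waitforlisten is the autotest_common.sh helper that polls the RPC socket; the & / $! bookkeeping is my reconstruction of how common.sh tracks spdk_tgt_pid):

  # Dirty shutdown: the target gets no chance to persist clean-state metadata.
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
  # Restart from the same config; FTL must now recover via shared memory and the NV cache.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"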
00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:12.572 11:40:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:12.572 [2024-11-15 11:40:55.371996] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:29:12.572 [2024-11-15 11:40:55.372195] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82003 ] 00:29:12.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81788 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:12.830 [2024-11-15 11:40:55.553329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.830 [2024-11-15 11:40:55.658690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.766 [2024-11-15 11:40:56.480342] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:13.766 [2024-11-15 11:40:56.480428] bdev.c:8672:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:13.766 [2024-11-15 11:40:56.626577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.626622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:13.766 [2024-11-15 11:40:56.626641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:13.766 [2024-11-15 11:40:56.626652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.626719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.626737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:13.766 [2024-11-15 11:40:56.626748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:13.766 [2024-11-15 11:40:56.626757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.626786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:13.766 [2024-11-15 11:40:56.627616] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:13.766 [2024-11-15 11:40:56.627649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.627661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:13.766 [2024-11-15 11:40:56.627673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.870 ms 00:29:13.766 [2024-11-15 11:40:56.627682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.628152] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:13.766 [2024-11-15 11:40:56.645936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.645979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:13.766 [2024-11-15 11:40:56.645995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.785 ms 00:29:13.766 [2024-11-15 11:40:56.646005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.655509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:13.766 [2024-11-15 11:40:56.655547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:13.766 [2024-11-15 11:40:56.655567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:13.766 [2024-11-15 11:40:56.655576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.655962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.655979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:13.766 [2024-11-15 11:40:56.655990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:29:13.766 [2024-11-15 11:40:56.655999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.656107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.656127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:13.766 [2024-11-15 11:40:56.656138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.086 ms 00:29:13.766 [2024-11-15 11:40:56.656147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.656179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.656193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:13.766 [2024-11-15 11:40:56.656203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:13.766 [2024-11-15 11:40:56.656213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.766 [2024-11-15 11:40:56.656240] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:13.766 [2024-11-15 11:40:56.659513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.766 [2024-11-15 11:40:56.659676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:13.767 [2024-11-15 11:40:56.659701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.279 ms 00:29:13.767 [2024-11-15 11:40:56.659712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.767 [2024-11-15 11:40:56.659753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.767 [2024-11-15 11:40:56.659767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:13.767 [2024-11-15 11:40:56.659777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:13.767 [2024-11-15 11:40:56.659786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.767 [2024-11-15 11:40:56.659832] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:13.767 [2024-11-15 11:40:56.659859] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:13.767 [2024-11-15 11:40:56.659894] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:13.767 [2024-11-15 11:40:56.659915] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:13.767 [2024-11-15 11:40:56.660022] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:13.767 [2024-11-15 11:40:56.660036] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:13.767 [2024-11-15 11:40:56.660064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:13.767 [2024-11-15 11:40:56.660078] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660090] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:13.767 [2024-11-15 11:40:56.660109] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:13.767 [2024-11-15 11:40:56.660118] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:13.767 [2024-11-15 11:40:56.660126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:13.767 [2024-11-15 11:40:56.660137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.767 [2024-11-15 11:40:56.660152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:13.767 [2024-11-15 11:40:56.660162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.307 ms 00:29:13.767 [2024-11-15 11:40:56.660171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.767 [2024-11-15 11:40:56.660263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.767 [2024-11-15 11:40:56.660275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:13.767 [2024-11-15 11:40:56.660285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:29:13.767 [2024-11-15 11:40:56.660294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.767 [2024-11-15 11:40:56.660382] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:13.767 [2024-11-15 11:40:56.660396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:13.767 [2024-11-15 11:40:56.660411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:13.767 [2024-11-15 11:40:56.660438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:13.767 [2024-11-15 11:40:56.660456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:13.767 [2024-11-15 11:40:56.660465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:13.767 [2024-11-15 11:40:56.660475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:13.767 [2024-11-15 11:40:56.660493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:13.767 [2024-11-15 11:40:56.660501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:13.767 [2024-11-15 11:40:56.660518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:13.767 [2024-11-15 11:40:56.660526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:13.767 [2024-11-15 11:40:56.660543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:13.767 [2024-11-15 11:40:56.660551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:13.767 [2024-11-15 11:40:56.660569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:13.767 [2024-11-15 11:40:56.660577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:13.767 [2024-11-15 11:40:56.660606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:13.767 [2024-11-15 11:40:56.660614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:13.767 [2024-11-15 11:40:56.660631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:13.767 [2024-11-15 11:40:56.660639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:13.767 [2024-11-15 11:40:56.660656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:13.767 [2024-11-15 11:40:56.660664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:13.767 [2024-11-15 11:40:56.660680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:13.767 [2024-11-15 11:40:56.660689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:13.767 [2024-11-15 11:40:56.660705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:13.767 [2024-11-15 11:40:56.660730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:13.767 [2024-11-15 11:40:56.660758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:13.767 [2024-11-15 11:40:56.660766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660775] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:13.767 [2024-11-15 11:40:56.660785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:13.767 [2024-11-15 11:40:56.660795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:13.767 [2024-11-15 11:40:56.660813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:13.767 [2024-11-15 11:40:56.660822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:13.767 [2024-11-15 11:40:56.660830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:13.767 [2024-11-15 11:40:56.660838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:13.767 [2024-11-15 11:40:56.660846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:13.767 [2024-11-15 11:40:56.660855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:13.767 [2024-11-15 11:40:56.660865] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:13.767 [2024-11-15 11:40:56.660876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:13.767 [2024-11-15 11:40:56.660895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:13.767 [2024-11-15 11:40:56.660922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:13.767 [2024-11-15 11:40:56.660932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:13.767 [2024-11-15 11:40:56.660940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:13.767 [2024-11-15 11:40:56.660949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.660992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.661001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:13.767 [2024-11-15 11:40:56.661010] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:13.767 [2024-11-15 11:40:56.661020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.661035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:13.767 [2024-11-15 11:40:56.661059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:13.768 [2024-11-15 11:40:56.661098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:13.768 [2024-11-15 11:40:56.661119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:13.768 [2024-11-15 11:40:56.661130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.768 [2024-11-15 11:40:56.661140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:13.768 [2024-11-15 11:40:56.661150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.801 ms 00:29:13.768 [2024-11-15 11:40:56.661161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.768 [2024-11-15 11:40:56.698363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.768 [2024-11-15 11:40:56.698481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:13.768 [2024-11-15 11:40:56.698517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.137 ms 00:29:13.768 [2024-11-15 11:40:56.698529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:13.768 [2024-11-15 11:40:56.698599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:13.768 [2024-11-15 11:40:56.698614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:13.768 [2024-11-15 11:40:56.698629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:13.768 [2024-11-15 11:40:56.698639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.026 [2024-11-15 11:40:56.740770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.026 [2024-11-15 11:40:56.740823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:14.026 [2024-11-15 11:40:56.740856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.042 ms 00:29:14.027 [2024-11-15 11:40:56.740867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.740931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.740946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:14.027 [2024-11-15 11:40:56.740959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:14.027 [2024-11-15 11:40:56.740969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.741224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.741244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:14.027 [2024-11-15 11:40:56.741257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:29:14.027 [2024-11-15 11:40:56.741269] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.741330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.741346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:14.027 [2024-11-15 11:40:56.741358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:14.027 [2024-11-15 11:40:56.741368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.761554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.761763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:14.027 [2024-11-15 11:40:56.761791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.151 ms 00:29:14.027 [2024-11-15 11:40:56.761813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.762019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.762108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:14.027 [2024-11-15 11:40:56.762125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:29:14.027 [2024-11-15 11:40:56.762137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.797576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.797777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:14.027 [2024-11-15 11:40:56.797807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.389 ms 00:29:14.027 [2024-11-15 11:40:56.797821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.808728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.808769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:14.027 [2024-11-15 11:40:56.808795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.766 ms 00:29:14.027 [2024-11-15 11:40:56.808807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.875609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.875676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:14.027 [2024-11-15 11:40:56.875719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.727 ms 00:29:14.027 [2024-11-15 11:40:56.875731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.875938] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:14.027 [2024-11-15 11:40:56.876106] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:14.027 [2024-11-15 11:40:56.876268] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:14.027 [2024-11-15 11:40:56.876384] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:14.027 [2024-11-15 11:40:56.876414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.876441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:14.027 [2024-11-15 
11:40:56.876468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.617 ms 00:29:14.027 [2024-11-15 11:40:56.876479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.876598] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:14.027 [2024-11-15 11:40:56.876619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.876635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:14.027 [2024-11-15 11:40:56.876648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:14.027 [2024-11-15 11:40:56.876658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.893731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.893777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:14.027 [2024-11-15 11:40:56.893810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.040 ms 00:29:14.027 [2024-11-15 11:40:56.893820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.903952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.903992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:14.027 [2024-11-15 11:40:56.904023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:14.027 [2024-11-15 11:40:56.904034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.027 [2024-11-15 11:40:56.904198] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:14.027 [2024-11-15 11:40:56.904485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.027 [2024-11-15 11:40:56.904506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:14.027 [2024-11-15 11:40:56.904520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:29:14.027 [2024-11-15 11:40:56.904531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.594 [2024-11-15 11:40:57.508459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.594 [2024-11-15 11:40:57.508601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:14.594 [2024-11-15 11:40:57.508621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 602.834 ms 00:29:14.594 [2024-11-15 11:40:57.508632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.594 [2024-11-15 11:40:57.513188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.594 [2024-11-15 11:40:57.513242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:14.594 [2024-11-15 11:40:57.513260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.176 ms 00:29:14.594 [2024-11-15 11:40:57.513286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.594 [2024-11-15 11:40:57.513843] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:14.594 [2024-11-15 11:40:57.513878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.594 [2024-11-15 11:40:57.513892] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:14.594 [2024-11-15 11:40:57.513905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:29:14.595 [2024-11-15 11:40:57.513916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.595 [2024-11-15 11:40:57.514021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.595 [2024-11-15 11:40:57.514054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:14.595 [2024-11-15 11:40:57.514068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:14.595 [2024-11-15 11:40:57.514093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:14.595 [2024-11-15 11:40:57.514164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 609.966 ms, result 0 00:29:14.595 [2024-11-15 11:40:57.514210] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:14.595 [2024-11-15 11:40:57.514290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:14.595 [2024-11-15 11:40:57.514301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:14.595 [2024-11-15 11:40:57.514312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:29:14.595 [2024-11-15 11:40:57.514336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.162 [2024-11-15 11:40:58.107577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.162 [2024-11-15 11:40:58.107645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:15.162 [2024-11-15 11:40:58.107667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 592.212 ms 00:29:15.162 [2024-11-15 11:40:58.107695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.421 [2024-11-15 11:40:58.112357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.421 [2024-11-15 11:40:58.112403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:15.421 [2024-11-15 11:40:58.112436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.084 ms 00:29:15.421 [2024-11-15 11:40:58.112447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.421 [2024-11-15 11:40:58.112958] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:15.421 [2024-11-15 11:40:58.112991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.421 [2024-11-15 11:40:58.113005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:15.421 [2024-11-15 11:40:58.113017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:29:15.421 [2024-11-15 11:40:58.113041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.421 [2024-11-15 11:40:58.113214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.421 [2024-11-15 11:40:58.113241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:15.421 [2024-11-15 11:40:58.113254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:15.421 [2024-11-15 11:40:58.113264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.421 [2024-11-15 
11:40:58.113314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 599.099 ms, result 0 00:29:15.421 [2024-11-15 11:40:58.113395] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:15.421 [2024-11-15 11:40:58.113441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:15.421 [2024-11-15 11:40:58.113454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.421 [2024-11-15 11:40:58.113464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:15.421 [2024-11-15 11:40:58.113475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1209.285 ms 00:29:15.421 [2024-11-15 11:40:58.113487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.421 [2024-11-15 11:40:58.113538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.421 [2024-11-15 11:40:58.113554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:15.421 [2024-11-15 11:40:58.113571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:15.422 [2024-11-15 11:40:58.113581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.126365] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:15.422 [2024-11-15 11:40:58.126565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.126585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:15.422 [2024-11-15 11:40:58.126598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.964 ms 00:29:15.422 [2024-11-15 11:40:58.126609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.127407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.127435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:15.422 [2024-11-15 11:40:58.127468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.710 ms 00:29:15.422 [2024-11-15 11:40:58.127479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.129904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.130123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:15.422 [2024-11-15 11:40:58.130152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.400 ms 00:29:15.422 [2024-11-15 11:40:58.130165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.130258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.130282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:15.422 [2024-11-15 11:40:58.130296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:15.422 [2024-11-15 11:40:58.130314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.130476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.130493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:15.422 
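The recovery trace above is easiest to digest as name/duration pairs: each management step logs both, and the slow ones (Restore P2L checkpoints at 66.727 ms, the two open-chunk recoveries at roughly 600 ms each) dominate the overall startup time. An illustrative one-liner for summarizing such a trace, not part of the test itself (it assumes a saved per-line console log, here called spdk_tgt.log):

  awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
       /430:trace_step/ { sub(/.*duration: /, ""); print $1 " ms  " name }' spdk_tgt.log \
      | sort -rn | head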
[2024-11-15 11:40:58.130505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:15.422 [2024-11-15 11:40:58.130515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.130547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.130561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:15.422 [2024-11-15 11:40:58.130572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:15.422 [2024-11-15 11:40:58.130583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.130635] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:15.422 [2024-11-15 11:40:58.130650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.130660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:15.422 [2024-11-15 11:40:58.130671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:15.422 [2024-11-15 11:40:58.130682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.130748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.422 [2024-11-15 11:40:58.130763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:15.422 [2024-11-15 11:40:58.130774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:15.422 [2024-11-15 11:40:58.130784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.422 [2024-11-15 11:40:58.132014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1504.894 ms, result 0 00:29:15.422 [2024-11-15 11:40:58.146952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.422 [2024-11-15 11:40:58.162947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:15.422 [2024-11-15 11:40:58.172390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:15.422 Validate MD5 checksum, iteration 1 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:15.422 11:40:58 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:15.422 11:40:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:15.422 [2024-11-15 11:40:58.286208] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:29:15.422 [2024-11-15 11:40:58.286511] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82042 ] 00:29:15.680 [2024-11-15 11:40:58.449106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.680 [2024-11-15 11:40:58.550561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.583  [2024-11-15T11:41:01.468Z] Copying: 488/1024 [MB] (488 MBps) [2024-11-15T11:41:01.468Z] Copying: 969/1024 [MB] (481 MBps) [2024-11-15T11:41:02.845Z] Copying: 1024/1024 [MB] (average 482 MBps) 00:29:19.896 00:29:19.896 11:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:19.896 11:41:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b759a46a741a4f80fae9e53640c530b6 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b759a46a741a4f80fae9e53640c530b6 != \b\7\5\9\a\4\6\a\7\4\1\a\4\f\8\0\f\a\e\9\e\5\3\6\4\0\c\5\3\0\b\6 ]] 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:21.799 Validate MD5 checksum, iteration 2 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:21.799 11:41:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:21.799 
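A note on the odd-looking comparison in the xtrace above: inside [[ ]], an unquoted right-hand side of != is treated as a glob pattern, so the script quotes the expected value to force a literal byte comparison, and set -x renders a quoted pattern operand with every character backslash-escaped, hence the \b\7\5\9... form. The underlying check is just:

  sum=$(md5sum "$file" | cut -f1 -d' ')
  [[ $sum != "$expected" ]] && exit 1   # RHS quoted: literal match, never a glob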
[2024-11-15 11:41:04.403782] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 00:29:21.799 [2024-11-15 11:41:04.404187] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82105 ] 00:29:21.799 [2024-11-15 11:41:04.575086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.799 [2024-11-15 11:41:04.716759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.703  [2024-11-15T11:41:07.590Z] Copying: 490/1024 [MB] (490 MBps) [2024-11-15T11:41:07.590Z] Copying: 973/1024 [MB] (483 MBps) [2024-11-15T11:41:08.526Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:29:25.577 00:29:25.577 11:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:25.577 11:41:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0117f52417497a97051b120d08eec57a 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0117f52417497a97051b120d08eec57a != \0\1\1\7\f\5\2\4\1\7\4\9\7\a\9\7\0\5\1\b\1\2\0\d\0\8\e\e\c\5\7\a ]] 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82003 ]] 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82003 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 82003 ']' 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 82003 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82003 00:29:27.481 killing process with pid 82003 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 82003' 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 82003 00:29:27.481 11:41:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 82003 00:29:28.418 [2024-11-15 11:41:11.169700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:28.418 [2024-11-15 11:41:11.185528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.185570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:28.418 [2024-11-15 11:41:11.185590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:28.418 [2024-11-15 11:41:11.185601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.185627] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:28.418 [2024-11-15 11:41:11.188697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.188859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:28.418 [2024-11-15 11:41:11.188891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.052 ms 00:29:28.418 [2024-11-15 11:41:11.188903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.189201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.189222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:28.418 [2024-11-15 11:41:11.189234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:29:28.418 [2024-11-15 11:41:11.189244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.190425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.190457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:28.418 [2024-11-15 11:41:11.190471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.162 ms 00:29:28.418 [2024-11-15 11:41:11.190481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.191571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.191603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:28.418 [2024-11-15 11:41:11.191618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.029 ms 00:29:28.418 [2024-11-15 11:41:11.191627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.201496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.201533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:28.418 [2024-11-15 11:41:11.201546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.811 ms 00:29:28.418 [2024-11-15 11:41:11.201562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.207051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.207086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:28.418 [2024-11-15 11:41:11.207099] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 5.451 ms 00:29:28.418 [2024-11-15 11:41:11.207110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.207180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.207197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:28.418 [2024-11-15 11:41:11.207207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:28.418 [2024-11-15 11:41:11.207217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.217301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.217350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:28.418 [2024-11-15 11:41:11.217364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.059 ms 00:29:28.418 [2024-11-15 11:41:11.217389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.227256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.227288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:28.418 [2024-11-15 11:41:11.227300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.818 ms 00:29:28.418 [2024-11-15 11:41:11.227308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.237074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.237121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:28.418 [2024-11-15 11:41:11.237134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.717 ms 00:29:28.418 [2024-11-15 11:41:11.237144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.248876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.418 [2024-11-15 11:41:11.248927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:28.418 [2024-11-15 11:41:11.248942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.655 ms 00:29:28.418 [2024-11-15 11:41:11.248952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.418 [2024-11-15 11:41:11.248992] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:28.418 [2024-11-15 11:41:11.249014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:28.418 [2024-11-15 11:41:11.249027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:28.418 [2024-11-15 11:41:11.249054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:28.418 [2024-11-15 11:41:11.249094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:28.418 [2024-11-15 11:41:11.249109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:28.418 [2024-11-15 11:41:11.249121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249144] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:28.419 [2024-11-15 11:41:11.249309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:28.419 [2024-11-15 11:41:11.249319] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 01844c90-77c9-4ba7-a387-4a9294e659ec 00:29:28.419 [2024-11-15 11:41:11.249332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:28.419 [2024-11-15 11:41:11.249342] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:28.419 [2024-11-15 11:41:11.249351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:28.419 [2024-11-15 11:41:11.249376] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:28.419 [2024-11-15 11:41:11.249401] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:28.419 [2024-11-15 11:41:11.249410] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:28.419 [2024-11-15 11:41:11.249435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:28.419 [2024-11-15 11:41:11.249443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:28.419 [2024-11-15 11:41:11.249452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:28.419 [2024-11-15 11:41:11.249463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.419 [2024-11-15 11:41:11.249479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:28.419 [2024-11-15 11:41:11.249490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:29:28.419 [2024-11-15 11:41:11.249500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.419 [2024-11-15 11:41:11.264889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.419 [2024-11-15 11:41:11.264941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:28.419 [2024-11-15 11:41:11.264957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.352 ms 
00:29:28.419 [2024-11-15 11:41:11.264968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.419 [2024-11-15 11:41:11.265469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:28.419 [2024-11-15 11:41:11.265495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:28.419 [2024-11-15 11:41:11.265508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.475 ms 00:29:28.419 [2024-11-15 11:41:11.265518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.419 [2024-11-15 11:41:11.313393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.419 [2024-11-15 11:41:11.313503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:28.419 [2024-11-15 11:41:11.313535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.419 [2024-11-15 11:41:11.313547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.419 [2024-11-15 11:41:11.313598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.419 [2024-11-15 11:41:11.313612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:28.419 [2024-11-15 11:41:11.313627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.419 [2024-11-15 11:41:11.313638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.419 [2024-11-15 11:41:11.313792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.419 [2024-11-15 11:41:11.313812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:28.419 [2024-11-15 11:41:11.313825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.419 [2024-11-15 11:41:11.313836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.419 [2024-11-15 11:41:11.313860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.419 [2024-11-15 11:41:11.313882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:28.419 [2024-11-15 11:41:11.313894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.419 [2024-11-15 11:41:11.313904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.403183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.403273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:28.693 [2024-11-15 11:41:11.403307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.403319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.480553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.480616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:28.693 [2024-11-15 11:41:11.480648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.480658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.480751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.480768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:28.693 [2024-11-15 11:41:11.480780] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.480790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.480895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.480912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:28.693 [2024-11-15 11:41:11.480931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.480954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.481164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.481194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:28.693 [2024-11-15 11:41:11.481209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.481221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.481278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.481295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:28.693 [2024-11-15 11:41:11.481308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.481326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.481373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.481388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:28.693 [2024-11-15 11:41:11.481415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.481425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.481505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:28.693 [2024-11-15 11:41:11.481522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:28.693 [2024-11-15 11:41:11.481539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:28.693 [2024-11-15 11:41:11.481549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:28.693 [2024-11-15 11:41:11.481690] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 296.122 ms, result 0 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:29.644 Remove shared memory files 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:29.644 11:41:12 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81788 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:29.644 ************************************ 00:29:29.644 END TEST ftl_upgrade_shutdown 00:29:29.644 ************************************ 00:29:29.644 00:29:29.644 real 1m26.554s 00:29:29.644 user 2m1.958s 00:29:29.644 sys 0m23.056s 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:29.644 11:41:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@14 -- # killprocess 74015 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@952 -- # '[' -z 74015 ']' 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@956 -- # kill -0 74015 00:29:29.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74015) - No such process 00:29:29.644 Process with pid 74015 is not found 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 74015 is not found' 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82219 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:29.644 11:41:12 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82219 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@833 -- # '[' -z 82219 ']' 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.644 11:41:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:29.903 [2024-11-15 11:41:12.631480] Starting SPDK v25.01-pre git sha1 514198259 / DPDK 24.03.0 initialization... 
00:29:29.903 [2024-11-15 11:41:12.631660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82219 ] 00:29:29.903 [2024-11-15 11:41:12.811901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.162 [2024-11-15 11:41:12.916005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.730 11:41:13 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:30.730 11:41:13 ftl -- common/autotest_common.sh@866 -- # return 0 00:29:30.730 11:41:13 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:31.299 nvme0n1 00:29:31.299 11:41:13 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:29:31.299 11:41:13 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:31.299 11:41:13 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:31.299 11:41:14 ftl -- ftl/common.sh@28 -- # stores=b6658e98-7c35-40b2-b296-b16c47193cb6 00:29:31.299 11:41:14 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:29:31.299 11:41:14 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6658e98-7c35-40b2-b296-b16c47193cb6 00:29:31.558 11:41:14 ftl -- ftl/ftl.sh@23 -- # killprocess 82219 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@952 -- # '[' -z 82219 ']' 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@956 -- # kill -0 82219 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@957 -- # uname 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82219 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:31.558 killing process with pid 82219 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82219' 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@971 -- # kill 82219 00:29:31.558 11:41:14 ftl -- common/autotest_common.sh@976 -- # wait 82219 00:29:33.463 11:41:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:33.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:33.722 Waiting for block devices as requested 00:29:33.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.981 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:33.981 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:39.253 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:39.253 11:41:21 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:29:39.253 Remove shared memory files 00:29:39.253 11:41:21 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:39.253 11:41:21 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:29:39.253 11:41:21 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:29:39.253 11:41:21 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:29:39.253 11:41:21 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:39.253 11:41:21 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:29:39.253 
************************************ 00:29:39.253 END TEST ftl 00:29:39.253 ************************************ 00:29:39.253 00:29:39.253 real 12m20.052s 00:29:39.253 user 15m13.325s 00:29:39.253 sys 1m31.394s 00:29:39.253 11:41:21 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:39.253 11:41:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:39.253 11:41:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:39.253 11:41:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:39.253 11:41:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:39.253 11:41:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:39.253 11:41:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:39.253 11:41:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:39.253 11:41:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:39.253 11:41:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:39.253 11:41:21 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:39.253 11:41:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:39.253 11:41:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:39.253 11:41:21 -- common/autotest_common.sh@10 -- # set +x 00:29:39.253 11:41:21 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:39.253 11:41:21 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:29:39.253 11:41:21 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:29:39.253 11:41:21 -- common/autotest_common.sh@10 -- # set +x 00:29:41.158 INFO: APP EXITING 00:29:41.158 INFO: killing all VMs 00:29:41.158 INFO: killing vhost app 00:29:41.158 INFO: EXIT DONE 00:29:41.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:41.725 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:41.725 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:41.725 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:29:41.725 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:41.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:42.555 Cleaning 00:29:42.555 Removing: /var/run/dpdk/spdk0/config 00:29:42.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:42.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:42.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:42.555 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:42.555 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:42.555 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:42.555 Removing: /var/run/dpdk/spdk0 00:29:42.555 Removing: /var/run/dpdk/spdk_pid57698 00:29:42.555 Removing: /var/run/dpdk/spdk_pid57933 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58162 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58266 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58317 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58456 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58474 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58684 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58789 00:29:42.555 Removing: /var/run/dpdk/spdk_pid58891 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59007 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59115 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59155 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59191 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59267 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59379 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59859 00:29:42.555 Removing: /var/run/dpdk/spdk_pid59934 
00:29:42.555 Removing: /var/run/dpdk/spdk_pid60010 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60026 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60183 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60204 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60358 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60376 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60450 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60469 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60533 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60551 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60752 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60788 00:29:42.555 Removing: /var/run/dpdk/spdk_pid60877 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61066 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61161 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61203 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61692 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61796 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61905 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61958 00:29:42.555 Removing: /var/run/dpdk/spdk_pid61989 00:29:42.555 Removing: /var/run/dpdk/spdk_pid62073 00:29:42.555 Removing: /var/run/dpdk/spdk_pid62710 00:29:42.555 Removing: /var/run/dpdk/spdk_pid62752 00:29:42.555 Removing: /var/run/dpdk/spdk_pid63268 00:29:42.555 Removing: /var/run/dpdk/spdk_pid63370 00:29:42.555 Removing: /var/run/dpdk/spdk_pid63486 00:29:42.555 Removing: /var/run/dpdk/spdk_pid63539 00:29:42.555 Removing: /var/run/dpdk/spdk_pid63570 00:29:42.555 Removing: /var/run/dpdk/spdk_pid63596 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65481 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65618 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65633 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65645 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65684 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65688 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65700 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65750 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65754 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65766 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65811 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65815 00:29:42.555 Removing: /var/run/dpdk/spdk_pid65827 00:29:42.555 Removing: /var/run/dpdk/spdk_pid67215 00:29:42.555 Removing: /var/run/dpdk/spdk_pid67328 00:29:42.555 Removing: /var/run/dpdk/spdk_pid68752 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70104 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70220 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70335 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70451 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70589 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70669 00:29:42.555 Removing: /var/run/dpdk/spdk_pid70817 00:29:42.555 Removing: /var/run/dpdk/spdk_pid71183 00:29:42.555 Removing: /var/run/dpdk/spdk_pid71225 00:29:42.555 Removing: /var/run/dpdk/spdk_pid71710 00:29:42.555 Removing: /var/run/dpdk/spdk_pid71890 00:29:42.555 Removing: /var/run/dpdk/spdk_pid71990 00:29:42.555 Removing: /var/run/dpdk/spdk_pid72117 00:29:42.555 Removing: /var/run/dpdk/spdk_pid72160 00:29:42.555 Removing: /var/run/dpdk/spdk_pid72191 00:29:42.555 Removing: /var/run/dpdk/spdk_pid72482 00:29:42.555 Removing: /var/run/dpdk/spdk_pid72547 00:29:42.555 Removing: /var/run/dpdk/spdk_pid72639 00:29:42.555 Removing: /var/run/dpdk/spdk_pid73060 00:29:42.555 Removing: /var/run/dpdk/spdk_pid73206 00:29:42.555 Removing: /var/run/dpdk/spdk_pid74015 00:29:42.814 Removing: /var/run/dpdk/spdk_pid74158 00:29:42.814 Removing: /var/run/dpdk/spdk_pid74357 00:29:42.814 Removing: 
/var/run/dpdk/spdk_pid74470 00:29:42.814 Removing: /var/run/dpdk/spdk_pid74829 00:29:42.814 Removing: /var/run/dpdk/spdk_pid75110 00:29:42.814 Removing: /var/run/dpdk/spdk_pid75471 00:29:42.814 Removing: /var/run/dpdk/spdk_pid75670 00:29:42.814 Removing: /var/run/dpdk/spdk_pid75817 00:29:42.814 Removing: /var/run/dpdk/spdk_pid75881 00:29:42.814 Removing: /var/run/dpdk/spdk_pid76041 00:29:42.814 Removing: /var/run/dpdk/spdk_pid76074 00:29:42.814 Removing: /var/run/dpdk/spdk_pid76132 00:29:42.814 Removing: /var/run/dpdk/spdk_pid76361 00:29:42.814 Removing: /var/run/dpdk/spdk_pid76603 00:29:42.814 Removing: /var/run/dpdk/spdk_pid77062 00:29:42.814 Removing: /var/run/dpdk/spdk_pid77557 00:29:42.814 Removing: /var/run/dpdk/spdk_pid78050 00:29:42.814 Removing: /var/run/dpdk/spdk_pid78617 00:29:42.814 Removing: /var/run/dpdk/spdk_pid78759 00:29:42.814 Removing: /var/run/dpdk/spdk_pid78853 00:29:42.814 Removing: /var/run/dpdk/spdk_pid79551 00:29:42.814 Removing: /var/run/dpdk/spdk_pid79618 00:29:42.814 Removing: /var/run/dpdk/spdk_pid80095 00:29:42.814 Removing: /var/run/dpdk/spdk_pid80575 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81229 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81352 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81399 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81469 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81525 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81595 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81788 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81867 00:29:42.814 Removing: /var/run/dpdk/spdk_pid81930 00:29:42.814 Removing: /var/run/dpdk/spdk_pid82003 00:29:42.814 Removing: /var/run/dpdk/spdk_pid82042 00:29:42.814 Removing: /var/run/dpdk/spdk_pid82105 00:29:42.814 Removing: /var/run/dpdk/spdk_pid82219 00:29:42.814 Clean 00:29:42.814 11:41:25 -- common/autotest_common.sh@1451 -- # return 0 00:29:42.814 11:41:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:42.814 11:41:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.814 11:41:25 -- common/autotest_common.sh@10 -- # set +x 00:29:42.814 11:41:25 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:42.814 11:41:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.814 11:41:25 -- common/autotest_common.sh@10 -- # set +x 00:29:42.814 11:41:25 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:42.814 11:41:25 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:42.814 11:41:25 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:42.814 11:41:25 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:42.814 11:41:25 -- spdk/autotest.sh@394 -- # hostname 00:29:42.814 11:41:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:43.073 geninfo: WARNING: invalid characters removed from testname! 
00:30:09.622 11:41:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:09.622 11:41:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:11.524 11:41:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:14.057 11:41:56 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:15.959 11:41:58 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:18.491 11:42:01 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:21.023 11:42:03 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:21.023 11:42:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:21.023 11:42:03 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:21.023 11:42:03 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:21.023 11:42:03 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:21.023 11:42:03 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:21.023 + [[ -n 5408 ]] 00:30:21.023 + sudo kill 5408 00:30:21.031 [Pipeline] } 00:30:21.047 [Pipeline] // timeout 00:30:21.053 [Pipeline] } 00:30:21.067 [Pipeline] // stage 00:30:21.073 [Pipeline] } 00:30:21.088 [Pipeline] // catchError 00:30:21.098 [Pipeline] stage 00:30:21.101 [Pipeline] { (Stop VM) 00:30:21.113 [Pipeline] sh 00:30:21.406 + vagrant halt 00:30:24.700 ==> default: Halting domain... 
00:30:31.272 [Pipeline] sh 00:30:31.552 + vagrant destroy -f 00:30:34.838 ==> default: Removing domain... 00:30:35.108 [Pipeline] sh 00:30:35.387 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:30:35.396 [Pipeline] } 00:30:35.411 [Pipeline] // stage 00:30:35.417 [Pipeline] } 00:30:35.431 [Pipeline] // dir 00:30:35.436 [Pipeline] } 00:30:35.450 [Pipeline] // wrap 00:30:35.456 [Pipeline] } 00:30:35.469 [Pipeline] // catchError 00:30:35.478 [Pipeline] stage 00:30:35.480 [Pipeline] { (Epilogue) 00:30:35.493 [Pipeline] sh 00:30:35.775 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:41.061 [Pipeline] catchError 00:30:41.063 [Pipeline] { 00:30:41.076 [Pipeline] sh 00:30:41.357 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:41.357 Artifacts sizes are good 00:30:41.366 [Pipeline] } 00:30:41.380 [Pipeline] // catchError 00:30:41.391 [Pipeline] archiveArtifacts 00:30:41.399 Archiving artifacts 00:30:41.509 [Pipeline] cleanWs 00:30:41.589 [WS-CLEANUP] Deleting project workspace... 00:30:41.589 [WS-CLEANUP] Deferred wipeout is used... 00:30:41.594 [WS-CLEANUP] done 00:30:41.596 [Pipeline] } 00:30:41.605 [Pipeline] // stage 00:30:41.607 [Pipeline] } 00:30:41.615 [Pipeline] // node 00:30:41.618 [Pipeline] End of Pipeline 00:30:41.639 Finished: SUCCESS