00:00:00.001 Started by upstream project "autotest-per-patch" build number 132334
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.106 The recommended git tool is: git
00:00:00.106 using credential 00000000-0000-0000-0000-000000000002
00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.196 Fetching changes from the remote Git repository
00:00:00.200 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.286 Using shallow fetch with depth 1
00:00:00.286 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.286 > git --version # timeout=10
00:00:00.357 > git --version # 'git version 2.39.2'
00:00:00.357 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.408 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.408 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.314 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.329 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.343 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.343 > git config core.sparsecheckout # timeout=10
00:00:07.357 > git read-tree -mu HEAD # timeout=10
00:00:07.379 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.402 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.402 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.500 [Pipeline] Start of Pipeline
00:00:07.514 [Pipeline] library
00:00:07.516 Loading library shm_lib@master
00:00:07.516 Library shm_lib@master is cached. Copying from home.
00:00:07.536 [Pipeline] node
00:00:22.544 Still waiting to schedule task
00:00:22.545 Waiting for next available executor on ‘vagrant-vm-host’
00:21:26.847 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest_2
00:21:26.848 [Pipeline] {
00:21:26.861 [Pipeline] catchError
00:21:26.863 [Pipeline] {
00:21:26.876 [Pipeline] wrap
00:21:26.885 [Pipeline] {
00:21:26.893 [Pipeline] stage
00:21:26.894 [Pipeline] { (Prologue)
00:21:26.911 [Pipeline] echo
00:21:26.913 Node: VM-host-WFP7
00:21:26.919 [Pipeline] cleanWs
00:21:26.927 [WS-CLEANUP] Deleting project workspace...
00:21:26.927 [WS-CLEANUP] Deferred wipeout is used...
00:21:26.933 [WS-CLEANUP] done
00:21:27.149 [Pipeline] setCustomBuildProperty
00:21:27.245 [Pipeline] httpRequest
00:21:27.553 [Pipeline] echo
00:21:27.555 Sorcerer 10.211.164.20 is alive
00:21:27.568 [Pipeline] retry
00:21:27.570 [Pipeline] {
00:21:27.585 [Pipeline] httpRequest
00:21:27.590 HttpMethod: GET
00:21:27.591 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:27.592 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:27.592 Response Code: HTTP/1.1 200 OK
00:21:27.593 Success: Status code 200 is in the accepted range: 200,404
00:21:27.593 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:27.739 [Pipeline] }
00:21:27.757 [Pipeline] // retry
00:21:27.766 [Pipeline] sh
00:21:28.049 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:21:28.065 [Pipeline] httpRequest
00:21:28.370 [Pipeline] echo
00:21:28.372 Sorcerer 10.211.164.20 is alive
00:21:28.383 [Pipeline] retry
00:21:28.386 [Pipeline] {
00:21:28.401 [Pipeline] httpRequest
00:21:28.406 HttpMethod: GET
00:21:28.407 URL: http://10.211.164.20/packages/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz
00:21:28.408 Sending request to url: http://10.211.164.20/packages/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz
00:21:28.408 Response Code: HTTP/1.1 200 OK
00:21:28.409 Success: Status code 200 is in the accepted range: 200,404
00:21:28.409 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz
00:21:30.685 [Pipeline] }
00:21:30.704 [Pipeline] // retry
00:21:30.712 [Pipeline] sh
00:21:30.996 + tar --no-same-owner -xf spdk_57b682926e45ec151052477d80f65bc81bd1ab2b.tar.gz
00:21:34.302 [Pipeline] sh
00:21:34.586 + git -C spdk log --oneline -n5
00:21:34.586 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:21:34.586 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true
00:21:34.586 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:21:34.586 95f6a056e bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:21:34.586 a38267915 bdev: Locate all hot data in spdk_bdev_desc to the first cache line
00:21:34.606 [Pipeline] writeFile
00:21:34.624 [Pipeline] sh
00:21:34.911 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:21:34.923 [Pipeline] sh
00:21:35.261 + cat autorun-spdk.conf
00:21:35.261 SPDK_RUN_FUNCTIONAL_TEST=1
00:21:35.261 SPDK_TEST_NVME=1
00:21:35.261 SPDK_TEST_FTL=1
00:21:35.261 SPDK_TEST_ISAL=1
00:21:35.261 SPDK_RUN_ASAN=1
00:21:35.261 SPDK_RUN_UBSAN=1
00:21:35.261 SPDK_TEST_XNVME=1
00:21:35.261 SPDK_TEST_NVME_FDP=1
00:21:35.261 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:35.269 RUN_NIGHTLY=0
00:21:35.271 [Pipeline] }
00:21:35.286 [Pipeline] // stage
00:21:35.305 [Pipeline] stage
00:21:35.308 [Pipeline] { (Run VM)
00:21:35.322 [Pipeline] sh
00:21:35.606 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:21:35.607 + echo 'Start stage prepare_nvme.sh'
00:21:35.607 Start stage prepare_nvme.sh
00:21:35.607 + [[ -n 1 ]]
00:21:35.607 + disk_prefix=ex1
00:21:35.607 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:21:35.607 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:21:35.607 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:21:35.607 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:21:35.607 ++ SPDK_TEST_NVME=1
00:21:35.607 ++ SPDK_TEST_FTL=1
00:21:35.607 ++ SPDK_TEST_ISAL=1
00:21:35.607 ++ SPDK_RUN_ASAN=1
00:21:35.607 ++ SPDK_RUN_UBSAN=1
00:21:35.607 ++ SPDK_TEST_XNVME=1
00:21:35.607 ++ SPDK_TEST_NVME_FDP=1
00:21:35.607 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:21:35.607 ++ RUN_NIGHTLY=0
00:21:35.607 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:21:35.607 + nvme_files=()
00:21:35.607 + declare -A nvme_files
00:21:35.607 + backend_dir=/var/lib/libvirt/images/backends
00:21:35.607 + nvme_files['nvme.img']=5G
00:21:35.607 + nvme_files['nvme-cmb.img']=5G
00:21:35.607 + nvme_files['nvme-multi0.img']=4G
00:21:35.607 + nvme_files['nvme-multi1.img']=4G
00:21:35.607 + nvme_files['nvme-multi2.img']=4G
00:21:35.607 + nvme_files['nvme-openstack.img']=8G
00:21:35.607 + nvme_files['nvme-zns.img']=5G
00:21:35.607 + (( SPDK_TEST_NVME_PMR == 1 ))
00:21:35.607 + (( SPDK_TEST_FTL == 1 ))
00:21:35.607 + nvme_files["nvme-ftl.img"]=6G
00:21:35.607 + (( SPDK_TEST_NVME_FDP == 1 ))
00:21:35.607 + nvme_files["nvme-fdp.img"]=1G
00:21:35.607 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:21:35.607 + for nvme in "${!nvme_files[@]}"
00:21:35.607 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:21:35.607 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:21:35.607 + for nvme in "${!nvme_files[@]}"
00:21:35.607 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:21:35.866 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:21:35.866 + for nvme in "${!nvme_files[@]}"
00:21:35.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:21:35.866 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:21:35.866 + for nvme in "${!nvme_files[@]}"
00:21:35.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:21:35.866 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:21:35.866 + for nvme in "${!nvme_files[@]}"
00:21:35.866 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:21:36.125 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:21:36.125 + for nvme in "${!nvme_files[@]}"
00:21:36.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:21:36.125 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:21:36.125 + for nvme in "${!nvme_files[@]}"
00:21:36.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:21:36.125 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:21:36.125 + for nvme in "${!nvme_files[@]}"
00:21:36.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:21:36.125 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:21:36.125 + for nvme in "${!nvme_files[@]}"
00:21:36.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:21:36.125 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:21:36.125 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:21:36.125 + echo 'End stage prepare_nvme.sh'
00:21:36.125 End stage prepare_nvme.sh
00:21:36.137 [Pipeline] sh
00:21:36.421 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:21:36.421 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:21:36.421
00:21:36.421 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:21:36.421 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:21:36.421 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:21:36.421 HELP=0
00:21:36.421 DRY_RUN=0
00:21:36.421 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:21:36.421 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:21:36.421 NVME_AUTO_CREATE=0
00:21:36.421 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:21:36.421 NVME_CMB=,,,,
00:21:36.421 NVME_PMR=,,,,
00:21:36.421 NVME_ZNS=,,,,
00:21:36.421 NVME_MS=true,,,,
00:21:36.421 NVME_FDP=,,,on,
00:21:36.421 SPDK_VAGRANT_DISTRO=fedora39
00:21:36.421 SPDK_VAGRANT_VMCPU=10
00:21:36.421 SPDK_VAGRANT_VMRAM=12288
00:21:36.421 SPDK_VAGRANT_PROVIDER=libvirt
00:21:36.421 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:21:36.421 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:21:36.421 SPDK_OPENSTACK_NETWORK=0
00:21:36.421 VAGRANT_PACKAGE_BOX=0
00:21:36.421 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:21:36.421 FORCE_DISTRO=true
00:21:36.421 VAGRANT_BOX_VERSION=
00:21:36.421 EXTRA_VAGRANTFILES=
00:21:36.421 NIC_MODEL=virtio
00:21:36.421
00:21:36.421 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:21:36.421 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:21:39.705 Bringing machine 'default' up with 'libvirt' provider...
00:21:40.272 ==> default: Creating image (snapshot of base box volume).
00:21:40.272 ==> default: Creating domain with the following settings...
00:21:40.272 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732080779_befbf94cbe18ed282e68
00:21:40.272 ==> default: -- Domain type: kvm
00:21:40.272 ==> default: -- Cpus: 10
00:21:40.272 ==> default: -- Feature: acpi
00:21:40.272 ==> default: -- Feature: apic
00:21:40.272 ==> default: -- Feature: pae
00:21:40.272 ==> default: -- Memory: 12288M
00:21:40.272 ==> default: -- Memory Backing: hugepages:
00:21:40.272 ==> default: -- Management MAC:
00:21:40.272 ==> default: -- Loader:
00:21:40.272 ==> default: -- Nvram:
00:21:40.272 ==> default: -- Base box: spdk/fedora39
00:21:40.272 ==> default: -- Storage pool: default
00:21:40.272 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732080779_befbf94cbe18ed282e68.img (20G)
00:21:40.272 ==> default: -- Volume Cache: default
00:21:40.272 ==> default: -- Kernel:
00:21:40.272 ==> default: -- Initrd:
00:21:40.272 ==> default: -- Graphics Type: vnc
00:21:40.272 ==> default: -- Graphics Port: -1
00:21:40.272 ==> default: -- Graphics IP: 127.0.0.1
00:21:40.272 ==> default: -- Graphics Password: Not defined
00:21:40.272 ==> default: -- Video Type: cirrus
00:21:40.272 ==> default: -- Video VRAM: 9216
00:21:40.272 ==> default: -- Sound Type:
00:21:40.272 ==> default: -- Keymap: en-us
00:21:40.272 ==> default: -- TPM Path:
00:21:40.272 ==> default: -- INPUT: type=mouse, bus=ps2
00:21:40.272 ==> default: -- Command line args:
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:21:40.272 ==> default: -> value=-drive,
00:21:40.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:21:40.272 ==> default: -> value=-drive,
00:21:40.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:21:40.272 ==> default: -> value=-drive,
00:21:40.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:40.272 ==> default: -> value=-drive,
00:21:40.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:40.272 ==> default: -> value=-drive,
00:21:40.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:21:40.272 ==> default: -> value=-drive,
00:21:40.272 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:21:40.272 ==> default: -> value=-device,
00:21:40.272 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:21:40.531 ==> default: Creating shared folders metadata...
00:21:40.531 ==> default: Starting domain.
00:21:41.931 ==> default: Waiting for domain to get an IP address...
00:22:00.030 ==> default: Waiting for SSH to become available...
00:22:00.030 ==> default: Configuring and enabling network interfaces...
00:22:02.573 default: SSH address: 192.168.121.183:22
00:22:02.573 default: SSH username: vagrant
00:22:02.573 default: SSH auth method: private key
00:22:05.147 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:22:13.264 ==> default: Mounting SSHFS shared folder...
00:22:15.799 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:22:15.799 ==> default: Checking Mount..
00:22:17.177 ==> default: Folder Successfully Mounted!
00:22:17.177 ==> default: Running provisioner: file...
00:22:18.113 default: ~/.gitconfig => .gitconfig
00:22:18.681
00:22:18.681 SUCCESS!
00:22:18.681
00:22:18.681 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:22:18.681 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:22:18.681 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:22:18.681
00:22:18.690 [Pipeline] }
00:22:18.706 [Pipeline] // stage
00:22:18.716 [Pipeline] dir
00:22:18.717 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:22:18.718 [Pipeline] {
00:22:18.733 [Pipeline] catchError
00:22:18.735 [Pipeline] {
00:22:18.748 [Pipeline] sh
00:22:19.029 + vagrant ssh-config --host vagrant
00:22:19.029 + sed -ne '/^Host/,$p'
00:22:19.029 + tee ssh_conf
00:22:22.314 Host vagrant
00:22:22.314 HostName 192.168.121.183
00:22:22.314 User vagrant
00:22:22.314 Port 22
00:22:22.314 UserKnownHostsFile /dev/null
00:22:22.314 StrictHostKeyChecking no
00:22:22.314 PasswordAuthentication no
00:22:22.314 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:22:22.314 IdentitiesOnly yes
00:22:22.314 LogLevel FATAL
00:22:22.314 ForwardAgent yes
00:22:22.314 ForwardX11 yes
00:22:22.326 [Pipeline] withEnv
00:22:22.328 [Pipeline] {
00:22:22.342 [Pipeline] sh
00:22:22.620 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:22:22.620 source /etc/os-release
00:22:22.620 [[ -e /image.version ]] && img=$(< /image.version)
00:22:22.620 # Minimal, systemd-like check.
00:22:22.620 if [[ -e /.dockerenv ]]; then
00:22:22.620 # Clear garbage from the node's name:
00:22:22.620 # agt-er_autotest_547-896 -> autotest_547-896
00:22:22.620 # $HOSTNAME is the actual container id
00:22:22.620 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:22:22.620 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:22:22.620 # We can assume this is a mount from a host where container is running,
00:22:22.620 # so fetch its hostname to easily identify the target swarm worker.
00:22:22.620 container="$(< /etc/hostname) ($agent)"
00:22:22.620 else
00:22:22.620 # Fallback
00:22:22.620 container=$agent
00:22:22.620 fi
00:22:22.620 fi
00:22:22.620 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:22:22.620
00:22:22.888 [Pipeline] }
00:22:22.904 [Pipeline] // withEnv
00:22:22.915 [Pipeline] setCustomBuildProperty
00:22:22.933 [Pipeline] stage
00:22:22.935 [Pipeline] { (Tests)
00:22:22.953 [Pipeline] sh
00:22:23.233 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:22:23.505 [Pipeline] sh
00:22:23.788 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:22:24.069 [Pipeline] timeout
00:22:24.069 Timeout set to expire in 50 min
00:22:24.070 [Pipeline] {
00:22:24.084 [Pipeline] sh
00:22:24.365 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:22:24.934 HEAD is now at 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:22:24.949 [Pipeline] sh
00:22:25.233 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:22:25.509 [Pipeline] sh
00:22:25.797 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:22:26.076 [Pipeline] sh
00:22:26.363 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:22:26.622 ++ readlink -f spdk_repo
00:22:26.622 + DIR_ROOT=/home/vagrant/spdk_repo
00:22:26.622 + [[ -n /home/vagrant/spdk_repo ]]
00:22:26.622 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:22:26.622 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:22:26.622 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:22:26.622 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:22:26.622 + [[ -d /home/vagrant/spdk_repo/output ]]
00:22:26.622 + [[ nvme-vg-autotest == pkgdep-* ]]
00:22:26.622 + cd /home/vagrant/spdk_repo
00:22:26.622 + source /etc/os-release
00:22:26.622 ++ NAME='Fedora Linux'
00:22:26.622 ++ VERSION='39 (Cloud Edition)'
00:22:26.622 ++ ID=fedora
00:22:26.622 ++ VERSION_ID=39
00:22:26.622 ++ VERSION_CODENAME=
00:22:26.622 ++ PLATFORM_ID=platform:f39
00:22:26.622 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:22:26.622 ++ ANSI_COLOR='0;38;2;60;110;180'
00:22:26.622 ++ LOGO=fedora-logo-icon
00:22:26.622 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:22:26.622 ++ HOME_URL=https://fedoraproject.org/
00:22:26.622 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:22:26.622 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:22:26.622 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:22:26.622 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:22:26.622 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:22:26.622 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:22:26.622 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:22:26.622 ++ SUPPORT_END=2024-11-12
00:22:26.622 ++ VARIANT='Cloud Edition'
00:22:26.622 ++ VARIANT_ID=cloud
00:22:26.622 + uname -a
00:22:26.622 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:22:26.622 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:22:26.881 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:22:27.449 Hugepages
00:22:27.449 node hugesize free / total
00:22:27.449 node0 1048576kB 0 / 0
00:22:27.449 node0 2048kB 0 / 0
00:22:27.449
00:22:27.449 Type BDF Vendor Device NUMA Driver Device Block devices
00:22:27.449 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:22:27.449 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:22:27.449 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:22:27.449 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:22:27.449 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:22:27.449 + rm -f /tmp/spdk-ld-path
00:22:27.449 + source autorun-spdk.conf
00:22:27.449 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:27.449 ++ SPDK_TEST_NVME=1
00:22:27.449 ++ SPDK_TEST_FTL=1
00:22:27.449 ++ SPDK_TEST_ISAL=1
00:22:27.449 ++ SPDK_RUN_ASAN=1
00:22:27.449 ++ SPDK_RUN_UBSAN=1
00:22:27.449 ++ SPDK_TEST_XNVME=1
00:22:27.449 ++ SPDK_TEST_NVME_FDP=1
00:22:27.449 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:27.449 ++ RUN_NIGHTLY=0
00:22:27.449 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:22:27.449 + [[ -n '' ]]
00:22:27.449 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:22:27.449 + for M in /var/spdk/build-*-manifest.txt
00:22:27.449 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:22:27.449 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:22:27.449 + for M in /var/spdk/build-*-manifest.txt
00:22:27.449 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:22:27.449 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:22:27.449 + for M in /var/spdk/build-*-manifest.txt
00:22:27.449 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:22:27.449 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:22:27.449 ++ uname
00:22:27.449 + [[ Linux == \L\i\n\u\x ]]
00:22:27.449 + sudo dmesg -T
00:22:27.709 + sudo dmesg --clear
00:22:27.709 + dmesg_pid=5459
00:22:27.709 + [[ Fedora Linux == FreeBSD ]]
00:22:27.709 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:27.709 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:22:27.709 + sudo dmesg -Tw
00:22:27.709 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:22:27.709 + [[ -x /usr/src/fio-static/fio ]]
00:22:27.709 + export FIO_BIN=/usr/src/fio-static/fio
00:22:27.709 + FIO_BIN=/usr/src/fio-static/fio
00:22:27.709 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:22:27.709 + [[ ! -v VFIO_QEMU_BIN ]]
00:22:27.709 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:22:27.709 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:27.709 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:22:27.709 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:22:27.709 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:27.709 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:22:27.709 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:27.709 05:33:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:22:27.709 05:33:47 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:22:27.709 05:33:47 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:22:27.709 05:33:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:22:27.709 05:33:47 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:22:27.969 05:33:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:22:27.969 05:33:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:27.969 05:33:47 -- scripts/common.sh@15 -- $ shopt -s extglob
00:22:27.969 05:33:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:22:27.969 05:33:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:27.969 05:33:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:27.969 05:33:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:27.969 05:33:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:27.969 05:33:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:27.969 05:33:47 -- paths/export.sh@5 -- $ export PATH
00:22:27.969 05:33:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:27.969 05:33:47 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:22:27.969 05:33:47 -- common/autobuild_common.sh@486 -- $ date +%s
00:22:27.969 05:33:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732080827.XXXXXX
00:22:27.969 05:33:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732080827.ZBxdyf
00:22:27.969 05:33:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:22:27.970 05:33:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:22:27.970 05:33:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:22:27.970 05:33:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:22:27.970 05:33:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:22:27.970 05:33:47 -- common/autobuild_common.sh@502 -- $ get_config_params
00:22:27.970 05:33:47 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:22:27.970 05:33:47 -- common/autotest_common.sh@10 -- $ set +x
00:22:27.970 05:33:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:22:27.970 05:33:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:22:27.970 05:33:47 -- pm/common@17 -- $ local monitor
00:22:27.970 05:33:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:27.970 05:33:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:22:27.970 05:33:47 -- pm/common@25 -- $ sleep 1
00:22:27.970 05:33:47 -- pm/common@21 -- $ date +%s
00:22:27.970 05:33:47 -- pm/common@21 -- $ date +%s
00:22:27.970 05:33:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732080827
00:22:27.970 05:33:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732080827
00:22:27.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732080827_collect-cpu-load.pm.log
00:22:27.970 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732080827_collect-vmstat.pm.log
00:22:28.906 05:33:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:22:28.906 05:33:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:22:28.906 05:33:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:22:28.906 05:33:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:22:28.906 05:33:48 -- spdk/autobuild.sh@16 -- $ date -u
00:22:28.906 Wed Nov 20 05:33:48 AM UTC 2024
00:22:28.906 05:33:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:22:28.906 v25.01-pre-192-g57b682926
00:22:28.906 05:33:48 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:22:28.906 05:33:48 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:22:28.906 05:33:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:22:28.906 05:33:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:22:28.906 05:33:48 -- common/autotest_common.sh@10 -- $ set +x
00:22:28.906 ************************************
00:22:28.906 START TEST asan
00:22:28.906 ************************************
00:22:28.906 using asan
00:22:28.906 05:33:48 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:22:28.906
00:22:28.906 real 0m0.000s
00:22:28.906 user 0m0.000s
00:22:28.906 sys 0m0.000s
00:22:28.906 05:33:48 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:22:28.906 05:33:48 asan -- common/autotest_common.sh@10 -- $ set +x
00:22:28.906 ************************************
00:22:28.906 END TEST asan
00:22:28.906 ************************************
00:22:28.906 05:33:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:22:28.906 05:33:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:22:28.906 05:33:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:22:28.906 05:33:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:22:28.906 05:33:48 -- common/autotest_common.sh@10 -- $ set +x
00:22:28.906 ************************************
00:22:28.906 START TEST ubsan
00:22:28.906 ************************************
00:22:28.906 using ubsan
00:22:28.906 05:33:48 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:22:28.906
00:22:28.906 real 0m0.001s
00:22:28.906 user 0m0.000s
00:22:28.906 sys 0m0.000s
00:22:28.906 05:33:48 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:22:28.906 05:33:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:22:28.906 ************************************
00:22:28.906 END TEST ubsan
00:22:28.906 ************************************
00:22:29.166 05:33:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:22:29.166 05:33:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:22:29.166 05:33:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:22:29.166 05:33:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:22:29.166 05:33:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:22:29.166 05:33:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:22:29.166 05:33:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:22:29.167 05:33:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:22:29.167 05:33:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:22:29.167 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:22:29.167 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:22:29.739 Using 'verbs' RDMA provider
00:22:46.079 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:23:01.010 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:23:01.579 Creating mk/config.mk...done.
00:23:01.579 Creating mk/cc.flags.mk...done.
00:23:01.579 Type 'make' to build.
00:23:01.579 05:34:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:23:01.579 05:34:21 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:23:01.579 05:34:21 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:23:01.579 05:34:21 -- common/autotest_common.sh@10 -- $ set +x
00:23:01.579 ************************************
00:23:01.579 START TEST make
00:23:01.579 ************************************
00:23:01.579 05:34:21 make -- common/autotest_common.sh@1127 -- $ make -j10
00:23:02.148 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:23:02.148 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:23:02.148 meson setup builddir \
00:23:02.148 -Dwith-libaio=enabled \
00:23:02.148 -Dwith-liburing=enabled \
00:23:02.148 -Dwith-libvfn=disabled \
00:23:02.148 -Dwith-spdk=disabled \
00:23:02.148 -Dexamples=false \
00:23:02.148 -Dtests=false \
00:23:02.148 -Dtools=false && \
00:23:02.148 meson compile -C builddir && \
00:23:02.148 cd -)
00:23:02.148 make[1]: Nothing to be done for 'all'.
00:23:04.765 The Meson build system
00:23:04.765 Version: 1.5.0
00:23:04.765 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:23:04.765 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:23:04.765 Build type: native build
00:23:04.765 Project name: xnvme
00:23:04.765 Project version: 0.7.5
00:23:04.765 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:23:04.765 C linker for the host machine: cc ld.bfd 2.40-14
00:23:04.765 Host machine cpu family: x86_64
00:23:04.765 Host machine cpu: x86_64
00:23:04.765 Message: host_machine.system: linux
00:23:04.765 Compiler for C supports arguments -Wno-missing-braces: YES
00:23:04.765 Compiler for C supports arguments -Wno-cast-function-type: YES
00:23:04.765 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:23:04.765 Run-time dependency threads found: YES
00:23:04.765 Has header "setupapi.h" : NO
00:23:04.765 Has header "linux/blkzoned.h" : YES
00:23:04.765 Has header "linux/blkzoned.h" : YES (cached)
00:23:04.765 Has header "libaio.h" : YES
00:23:04.765 Library aio found: YES
00:23:04.765 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:23:04.765 Run-time dependency liburing found: YES 2.2
00:23:04.765 Dependency libvfn skipped: feature with-libvfn disabled
00:23:04.765 Found CMake: /usr/bin/cmake (3.27.7)
00:23:04.765 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:23:04.765 Subproject spdk : skipped: feature with-spdk disabled
00:23:04.765 Run-time dependency appleframeworks found: NO (tried framework)
00:23:04.765 Run-time dependency appleframeworks found: NO (tried framework)
00:23:04.765 Library rt found: YES
00:23:04.765 Checking for function "clock_gettime" with dependency -lrt: YES
00:23:04.765 Configuring xnvme_config.h using configuration
00:23:04.765 Configuring xnvme.spec using configuration
00:23:04.765 Run-time dependency bash-completion found: YES 2.11
00:23:04.765 Message: Bash-completions: /usr/share/bash-completion/completions
00:23:04.765 Program cp found: YES (/usr/bin/cp)
00:23:04.765 Build targets in project: 3
00:23:04.765
00:23:04.765 xnvme 0.7.5
00:23:04.765
00:23:04.765 Subprojects
00:23:04.765 spdk : NO Feature 'with-spdk' disabled
00:23:04.765
00:23:04.765 User defined options
00:23:04.765 examples : false
00:23:04.765 tests : false
00:23:04.765 tools : false
00:23:04.765 with-libaio : enabled
00:23:04.765 with-liburing: enabled
00:23:04.765 with-libvfn : disabled
00:23:04.765 with-spdk : disabled
00:23:04.765
00:23:04.765 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:23:05.024 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:23:05.024 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:23:05.283 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:23:05.283 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:23:05.283 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:23:05.284 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:23:05.284 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:23:05.284 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:23:05.284 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:23:05.284 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:23:05.284 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:23:05.284 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:23:05.284 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:23:05.284 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:23:05.284 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:23:05.284 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:23:05.284 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:23:05.284 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:23:05.543 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:23:05.543 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:23:05.543 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:23:05.543 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:23:05.543 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:23:05.543 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:23:05.543 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:23:05.543 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:23:05.543 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:23:05.543 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:23:05.543 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:23:05.543 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:23:05.543 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:23:05.543 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:23:05.543 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:23:05.543 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:23:05.543 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:23:05.543 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:23:05.543 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:23:05.543 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:23:05.543 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:23:05.543 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:23:05.543 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:23:05.543 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:23:05.543 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:23:05.543 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:23:05.543 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:23:05.543 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:23:05.543 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:23:05.543 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:23:05.543 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:23:05.802 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:23:05.802 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:23:05.802 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:23:05.802 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:23:05.802 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:23:05.802 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:23:05.802 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:23:05.802 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:23:05.802 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:23:05.802 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:23:05.802 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:23:05.802 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:23:05.802 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:23:05.802 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:23:05.802 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:23:06.062 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:23:06.062 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:23:06.062 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:23:06.062 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:23:06.062 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:23:06.062 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:23:06.062 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:23:06.062 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:23:06.062 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:23:06.062 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:23:06.628 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:23:06.628 [75/76] Linking static target lib/libxnvme.a
00:23:06.628 [76/76] Linking target lib/libxnvme.so.0.7.5
00:23:06.628 INFO: autodetecting backend as ninja
00:23:06.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:23:06.628 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:23:14.746 The Meson build system
00:23:14.746 Version: 1.5.0
00:23:14.746 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:23:14.746 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:23:14.746 Build type: native build
00:23:14.746 Program cat found: YES (/usr/bin/cat)
00:23:14.746 Project name: DPDK
00:23:14.746 Project version: 24.03.0
00:23:14.746 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:23:14.746 C linker for the host machine: cc ld.bfd 2.40-14
00:23:14.746 Host machine cpu family: x86_64
00:23:14.746 Host machine cpu: x86_64
00:23:14.746 Message: ## Building in Developer Mode ##
00:23:14.746 Program pkg-config found: YES (/usr/bin/pkg-config)
00:23:14.746 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:23:14.746 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:23:14.746 Program python3 found: YES (/usr/bin/python3)
00:23:14.746 Program cat found: YES (/usr/bin/cat)
00:23:14.746 Compiler for C supports arguments -march=native: YES
00:23:14.746 Checking for size of "void *" : 8
00:23:14.746 Checking for size of "void *" : 8 (cached)
00:23:14.746 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:23:14.746 Library m found: YES
00:23:14.746 Library numa found: YES
00:23:14.746 Has header "numaif.h" : YES
00:23:14.746 Library fdt found: NO
00:23:14.746 Library execinfo found: NO
00:23:14.746 Has header "execinfo.h" : YES
00:23:14.746 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:23:14.746 Run-time dependency libarchive found: NO (tried pkgconfig)
00:23:14.746 Run-time dependency libbsd found: NO (tried pkgconfig)
00:23:14.746 Run-time dependency jansson found: NO (tried pkgconfig)
00:23:14.746 Run-time dependency openssl found: YES 3.1.1
00:23:14.746 Run-time dependency libpcap found: YES 1.10.4
00:23:14.746 Has header "pcap.h" with dependency libpcap: YES
00:23:14.746 Compiler for C supports arguments -Wcast-qual: YES
00:23:14.746 Compiler for C supports arguments -Wdeprecated: YES
00:23:14.746 Compiler for C supports arguments -Wformat: YES
00:23:14.746 Compiler for C supports arguments -Wformat-nonliteral: NO
00:23:14.746 Compiler for C supports arguments -Wformat-security: NO
00:23:14.746 Compiler for C supports arguments -Wmissing-declarations: YES
00:23:14.746 Compiler for C supports arguments -Wmissing-prototypes: YES
00:23:14.746 Compiler for C supports arguments -Wnested-externs: YES
00:23:14.746 Compiler for C supports arguments -Wold-style-definition: YES
00:23:14.746 Compiler for C supports arguments -Wpointer-arith: YES
00:23:14.746 Compiler for C supports arguments -Wsign-compare: YES
00:23:14.746 Compiler for C supports arguments -Wstrict-prototypes: YES
00:23:14.746 Compiler for C supports arguments -Wundef: YES
00:23:14.746 Compiler for C supports arguments -Wwrite-strings: YES
00:23:14.746 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:23:14.746 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:23:14.746 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:23:14.746 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:23:14.746 Program objdump found: YES (/usr/bin/objdump)
00:23:14.746 Compiler for C supports arguments -mavx512f: YES
00:23:14.746 Checking if "AVX512 checking" compiles: YES
00:23:14.746 Fetching value of define "__SSE4_2__" : 1
00:23:14.746 Fetching value of define "__AES__" : 1
00:23:14.746 Fetching value of define "__AVX__" : 1
00:23:14.746 Fetching value of define "__AVX2__" : 1
00:23:14.746 Fetching value of define "__AVX512BW__" : 1
00:23:14.746 Fetching value of define "__AVX512CD__" : 1
00:23:14.746 Fetching value of define "__AVX512DQ__" : 1
00:23:14.746 Fetching value of define "__AVX512F__" : 1
00:23:14.746 Fetching value of define "__AVX512VL__" : 1
00:23:14.746 Fetching value of define "__PCLMUL__" : 1
00:23:14.746 Fetching value of define "__RDRND__" : 1
00:23:14.746 Fetching value of define "__RDSEED__" : 1
00:23:14.746 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:23:14.746 Fetching value of define "__znver1__" : (undefined)
00:23:14.746 Fetching value of define "__znver2__" : (undefined)
00:23:14.746 Fetching value of define "__znver3__" : (undefined)
00:23:14.746 Fetching value of define "__znver4__" : (undefined)
00:23:14.746 Library asan found: YES
00:23:14.746 Compiler for C supports arguments -Wno-format-truncation: YES
00:23:14.746 Message: lib/log: Defining dependency "log"
00:23:14.746 Message: lib/kvargs: Defining dependency "kvargs"
00:23:14.746 Message: lib/telemetry: Defining dependency "telemetry"
00:23:14.746 Library rt found: YES
00:23:14.746 Checking for function "getentropy" : NO
00:23:14.746 Message: lib/eal: Defining dependency "eal"
00:23:14.746 Message: lib/ring: Defining dependency "ring"
00:23:14.746 Message: lib/rcu: Defining dependency "rcu"
00:23:14.746 Message: lib/mempool: Defining dependency "mempool"
00:23:14.747 Message: lib/mbuf: Defining dependency "mbuf"
00:23:14.747 Fetching value of define "__PCLMUL__" : 1 (cached)
00:23:14.747 Fetching value of define "__AVX512F__" : 1 (cached)
00:23:14.747 Fetching value of define "__AVX512BW__" : 1 (cached)
00:23:14.747 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:23:14.747 Fetching value of define "__AVX512VL__" : 1 (cached)
00:23:14.747 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:23:14.747 Compiler for C supports arguments -mpclmul: YES
00:23:14.747 Compiler for C supports arguments -maes: YES
00:23:14.747 Compiler for C supports arguments -mavx512f: YES (cached)
00:23:14.747 Compiler for C supports arguments -mavx512bw: YES
00:23:14.747 Compiler for C supports arguments -mavx512dq: YES
00:23:14.747 Compiler for C supports arguments -mavx512vl: YES
00:23:14.747 Compiler for C supports arguments -mvpclmulqdq: YES
00:23:14.747 Compiler for C supports arguments -mavx2: YES
00:23:14.747 Compiler for C supports arguments -mavx: YES
00:23:14.747 Message: lib/net: Defining dependency "net"
00:23:14.747 Message: lib/meter: Defining dependency "meter"
00:23:14.747 Message: lib/ethdev: Defining dependency "ethdev"
00:23:14.747 Message: lib/pci: Defining dependency "pci"
00:23:14.747 Message: lib/cmdline: Defining dependency "cmdline"
00:23:14.747 Message: lib/hash: Defining dependency "hash"
00:23:14.747 Message: lib/timer: Defining dependency "timer"
00:23:14.747 Message: lib/compressdev: Defining dependency "compressdev"
00:23:14.747 Message: lib/cryptodev: Defining dependency "cryptodev"
00:23:14.747 Message: lib/dmadev: Defining dependency "dmadev"
00:23:14.747 Compiler for C supports arguments -Wno-cast-qual: YES
00:23:14.747 Message: lib/power: Defining dependency "power"
00:23:14.747 Message: lib/reorder: Defining dependency "reorder"
00:23:14.747 Message: lib/security: Defining dependency "security"
00:23:14.747 Has header "linux/userfaultfd.h" : YES
00:23:14.747 Has header "linux/vduse.h" : YES
00:23:14.747 Message: lib/vhost: Defining dependency "vhost"
00:23:14.747 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:23:14.747 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:23:14.747 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:23:14.747 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:23:14.747 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:23:14.747 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:23:14.747 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:23:14.747 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:23:14.747 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:23:14.747 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:23:14.747 Program doxygen found: YES (/usr/local/bin/doxygen)
00:23:14.747 Configuring doxy-api-html.conf using configuration
00:23:14.747 Configuring doxy-api-man.conf using configuration
00:23:14.747 Program mandb found: YES (/usr/bin/mandb)
00:23:14.747 Program sphinx-build found: NO
00:23:14.747 Configuring rte_build_config.h using configuration
00:23:14.747 Message:
00:23:14.747 =================
00:23:14.747 Applications Enabled
00:23:14.747 =================
00:23:14.747
00:23:14.747 apps:
00:23:14.747
00:23:14.747
00:23:14.747 Message:
00:23:14.747 =================
00:23:14.747 Libraries Enabled
00:23:14.747 =================
00:23:14.747
00:23:14.747 libs:
00:23:14.747 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:23:14.747 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:23:14.747 cryptodev, dmadev, power, reorder, security, vhost,
00:23:14.747
00:23:14.747 Message:
00:23:14.747 ===============
00:23:14.747 Drivers Enabled
00:23:14.747 ===============
00:23:14.747
00:23:14.747 common:
00:23:14.747
00:23:14.747 bus:
00:23:14.747 pci, vdev,
00:23:14.747 mempool:
00:23:14.747 ring,
00:23:14.747 dma:
00:23:14.747
00:23:14.747 net:
00:23:14.747
00:23:14.747 crypto:
00:23:14.747
00:23:14.747 compress:
00:23:14.747
00:23:14.747 vdpa:
00:23:14.747
00:23:14.747
00:23:14.747 Message:
00:23:14.747 =================
00:23:14.747 Content Skipped
00:23:14.747 =================
00:23:14.747
00:23:14.747 apps:
00:23:14.747 dumpcap: explicitly disabled via build config
00:23:14.747 graph: explicitly disabled via build config
00:23:14.747 pdump: explicitly disabled via build config
00:23:14.747 proc-info: explicitly disabled via build config
00:23:14.747 test-acl: explicitly disabled via build config
00:23:14.747 test-bbdev: explicitly disabled via build config
00:23:14.747 test-cmdline: explicitly disabled via build config
00:23:14.747 test-compress-perf: explicitly disabled via build config
00:23:14.747 test-crypto-perf: explicitly disabled via build config
00:23:14.747 test-dma-perf: explicitly disabled via build config
00:23:14.747 test-eventdev: explicitly disabled via build config
00:23:14.747 test-fib: explicitly disabled via build config
00:23:14.747 test-flow-perf: explicitly disabled via build config
00:23:14.747 test-gpudev: explicitly disabled via build config
00:23:14.747 test-mldev: explicitly disabled via build config
00:23:14.747 test-pipeline: explicitly disabled via build config
00:23:14.747 test-pmd: explicitly disabled via build config
00:23:14.747 test-regex: explicitly disabled via build config
00:23:14.747 test-sad: explicitly disabled via build config
00:23:14.747 test-security-perf: explicitly disabled via build config
00:23:14.747
00:23:14.747 libs:
00:23:14.747 argparse: explicitly disabled via build config
00:23:14.747 metrics: explicitly disabled via build config
00:23:14.747 acl: explicitly disabled via build config
00:23:14.747 bbdev: explicitly disabled via build config
00:23:14.747 bitratestats: explicitly disabled via build config
00:23:14.747 bpf: explicitly disabled via build config
00:23:14.747 cfgfile: explicitly disabled via build config
00:23:14.747 distributor: explicitly disabled via build config
00:23:14.747 efd: explicitly disabled via build config
00:23:14.747 eventdev: explicitly disabled via build config
00:23:14.747 dispatcher: explicitly disabled via build config
00:23:14.747 gpudev: explicitly disabled via build config
00:23:14.747 gro: explicitly disabled via build config
00:23:14.747 gso: explicitly disabled via build config
00:23:14.747 ip_frag: explicitly disabled via build config
00:23:14.747 jobstats: explicitly disabled via build config
00:23:14.747 latencystats: explicitly disabled via build config
00:23:14.747 lpm: explicitly disabled via build config
00:23:14.747 member: explicitly disabled via build config
00:23:14.747 pcapng: explicitly disabled via build config
00:23:14.747 rawdev: explicitly disabled via build config
00:23:14.747 regexdev: explicitly disabled via build config
00:23:14.747 mldev: explicitly disabled via build config
00:23:14.747 rib: explicitly disabled via build config
00:23:14.747 sched: explicitly disabled via build config
00:23:14.747 stack: explicitly disabled via build config
00:23:14.747 ipsec: explicitly disabled via build config
00:23:14.747 pdcp: explicitly disabled via build config
00:23:14.747 fib: explicitly disabled via build config
00:23:14.747 port: explicitly disabled via build config
00:23:14.747 pdump: explicitly disabled via build config
00:23:14.747 table: explicitly disabled via build config
00:23:14.747 pipeline: explicitly disabled via build config
00:23:14.747 graph: explicitly disabled via build config
00:23:14.747 node: explicitly disabled via build config
00:23:14.747
00:23:14.747 drivers:
00:23:14.747 common/cpt: not in enabled drivers build config
00:23:14.747 common/dpaax: not in enabled drivers build config
00:23:14.747 common/iavf: not in enabled drivers build config
00:23:14.747 common/idpf: not in enabled drivers build config
00:23:14.747 common/ionic: not in enabled drivers build config
00:23:14.747 common/mvep: not in enabled drivers build config
00:23:14.747 common/octeontx: not in enabled drivers build config
00:23:14.747 bus/auxiliary: not in enabled drivers build config
00:23:14.747 bus/cdx: not in enabled drivers build config
00:23:14.747 bus/dpaa: not in enabled drivers build config
00:23:14.747 bus/fslmc: not in enabled drivers build config
00:23:14.747 bus/ifpga: not in enabled drivers build config
00:23:14.748 bus/platform: not in enabled drivers build config
00:23:14.748 bus/uacce: not in enabled drivers build config
00:23:14.748 bus/vmbus: not in enabled drivers build config
00:23:14.748 common/cnxk: not in enabled drivers build config
00:23:14.748 common/mlx5: not in enabled drivers build config
00:23:14.748 common/nfp: not in enabled drivers build config
00:23:14.748 common/nitrox: not in enabled drivers build config
00:23:14.748 common/qat: not in enabled drivers build config
00:23:14.748 common/sfc_efx: not in enabled drivers build config
00:23:14.748 mempool/bucket: not in enabled drivers build config
00:23:14.748 mempool/cnxk: not in enabled drivers build config
00:23:14.748 mempool/dpaa: not in enabled drivers build config
00:23:14.748 mempool/dpaa2: not in enabled drivers build config
00:23:14.748 mempool/octeontx: not in enabled drivers build config
00:23:14.748 mempool/stack: not in enabled drivers build config
00:23:14.748 dma/cnxk: not in enabled drivers build config
00:23:14.748 dma/dpaa: not in enabled drivers build config
00:23:14.748 dma/dpaa2: not in enabled drivers build config
00:23:14.748 dma/hisilicon: not in enabled drivers build config
00:23:14.748 dma/idxd: not in enabled drivers build config
00:23:14.748 dma/ioat: not in enabled drivers build config
00:23:14.748 dma/skeleton: not in enabled drivers build config
00:23:14.748 net/af_packet: not in enabled drivers build config
00:23:14.748 net/af_xdp: not in enabled drivers build config
00:23:14.748 net/ark: not in enabled drivers build config
00:23:14.748 net/atlantic: not in enabled drivers build config
00:23:14.748 net/avp: not in enabled drivers build config
00:23:14.748 net/axgbe: not in enabled drivers build config
00:23:14.748 net/bnx2x: not in enabled drivers build config
00:23:14.748 net/bnxt: not in enabled drivers build config
00:23:14.748 net/bonding: not in enabled drivers build config
00:23:14.748 net/cnxk: not in enabled drivers build config
00:23:14.748 net/cpfl: not in enabled drivers build config
00:23:14.748 net/cxgbe: not in enabled drivers build config
00:23:14.748 net/dpaa: not in enabled drivers build config
00:23:14.748 net/dpaa2: not in enabled drivers build config
00:23:14.748 net/e1000: not in enabled drivers build config
00:23:14.748 net/ena: not in enabled drivers build config
00:23:14.748 net/enetc: not in enabled drivers build config
00:23:14.748 net/enetfec: not in enabled drivers build config
00:23:14.748 net/enic: not in enabled drivers build config
00:23:14.748 net/failsafe: not in enabled drivers build config
00:23:14.748 net/fm10k: not in enabled drivers build config
00:23:14.748 net/gve: not in enabled drivers build config
00:23:14.748 net/hinic: not in enabled drivers build config
00:23:14.748 net/hns3: not in enabled drivers build config
00:23:14.748 net/i40e: not in enabled drivers build config
00:23:14.748 net/iavf: not in enabled drivers build config
00:23:14.748 net/ice: not in enabled drivers build config
00:23:14.748 net/idpf: not in enabled drivers build config
00:23:14.748 net/igc: not in enabled drivers build config
00:23:14.748 net/ionic: not in enabled drivers build config
00:23:14.748 net/ipn3ke: not in enabled drivers build config
00:23:14.748 net/ixgbe: not in enabled drivers build config
00:23:14.748 net/mana: not in enabled drivers build config
00:23:14.748 net/memif: not in enabled drivers build config
00:23:14.748 net/mlx4: not in enabled drivers build config
00:23:14.748 net/mlx5: not in enabled drivers build config
00:23:14.748 net/mvneta: not in enabled drivers build config
00:23:14.748 net/mvpp2: not in enabled drivers build config
00:23:14.748 net/netvsc: not in enabled drivers build config
00:23:14.748 net/nfb: not in enabled drivers build config
00:23:14.748 net/nfp: not in enabled drivers build config
00:23:14.748 net/ngbe: not in enabled drivers build config
00:23:14.748 net/null: not in enabled drivers build config
00:23:14.748 net/octeontx: not in enabled drivers build config
00:23:14.748 net/octeon_ep: not in enabled drivers build config
00:23:14.748 net/pcap: not in enabled drivers build config
00:23:14.748 net/pfe: not in enabled drivers build config
00:23:14.748 net/qede: not in enabled drivers build config
00:23:14.748 net/ring: not in enabled drivers build config
00:23:14.748 net/sfc: not in enabled drivers build config
00:23:14.748 net/softnic: not in enabled drivers build config
00:23:14.748 net/tap: not in enabled drivers build config
00:23:14.748 net/thunderx: not in enabled drivers build config
00:23:14.748 net/txgbe: not in enabled drivers build config
00:23:14.748 net/vdev_netvsc: not in enabled drivers build config
00:23:14.748 net/vhost: not in enabled drivers build config
00:23:14.748 net/virtio: not in enabled drivers build config
00:23:14.748 net/vmxnet3: not in enabled drivers build config
00:23:14.748 raw/*: missing internal dependency, "rawdev"
00:23:14.748 crypto/armv8: not in enabled drivers build config
00:23:14.748 crypto/bcmfs: not in enabled drivers build config
00:23:14.748 crypto/caam_jr: not in enabled drivers build config
00:23:14.748 crypto/ccp: not in enabled drivers build config
00:23:14.748 crypto/cnxk: not in enabled drivers build config
00:23:14.748 crypto/dpaa_sec: not in enabled drivers build config
00:23:14.748 crypto/dpaa2_sec: not in enabled drivers build config
00:23:14.748 crypto/ipsec_mb: not in enabled drivers build config
00:23:14.748 crypto/mlx5: not in enabled drivers build config
00:23:14.748 crypto/mvsam: not in enabled drivers build config
00:23:14.748 crypto/nitrox: not in enabled drivers build config 00:23:14.748 crypto/null: not in enabled drivers build config 00:23:14.748 crypto/octeontx: not in enabled drivers build config 00:23:14.748 crypto/openssl: not in enabled drivers build config 00:23:14.748 crypto/scheduler: not in enabled drivers build config 00:23:14.748 crypto/uadk: not in enabled drivers build config 00:23:14.748 crypto/virtio: not in enabled drivers build config 00:23:14.748 compress/isal: not in enabled drivers build config 00:23:14.748 compress/mlx5: not in enabled drivers build config 00:23:14.748 compress/nitrox: not in enabled drivers build config 00:23:14.748 compress/octeontx: not in enabled drivers build config 00:23:14.748 compress/zlib: not in enabled drivers build config 00:23:14.748 regex/*: missing internal dependency, "regexdev" 00:23:14.748 ml/*: missing internal dependency, "mldev" 00:23:14.748 vdpa/ifc: not in enabled drivers build config 00:23:14.748 vdpa/mlx5: not in enabled drivers build config 00:23:14.748 vdpa/nfp: not in enabled drivers build config 00:23:14.748 vdpa/sfc: not in enabled drivers build config 00:23:14.748 event/*: missing internal dependency, "eventdev" 00:23:14.748 baseband/*: missing internal dependency, "bbdev" 00:23:14.748 gpu/*: missing internal dependency, "gpudev" 00:23:14.748 00:23:14.748 00:23:14.748 Build targets in project: 85 00:23:14.748 00:23:14.748 DPDK 24.03.0 00:23:14.748 00:23:14.748 User defined options 00:23:14.748 buildtype : debug 00:23:14.748 default_library : shared 00:23:14.748 libdir : lib 00:23:14.748 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:14.748 b_sanitize : address 00:23:14.748 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:23:14.748 c_link_args : 00:23:14.748 cpu_instruction_set: native 00:23:14.748 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:23:14.748 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:23:14.748 enable_docs : false 00:23:14.748 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:23:14.748 enable_kmods : false 00:23:14.748 max_lcores : 128 00:23:14.748 tests : false 00:23:14.749 00:23:14.749 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:23:15.317 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:23:15.317 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:23:15.317 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:23:15.317 [3/268] Linking static target lib/librte_kvargs.a 00:23:15.575 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:23:15.575 [5/268] Linking static target lib/librte_log.a 00:23:15.575 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:23:15.834 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:23:15.834 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:23:15.834 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:23:16.093 [10/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:23:16.093 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:23:16.093 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:23:16.093 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:23:16.093 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:23:16.093 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:23:16.352 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:23:16.352 [17/268] Linking static target lib/librte_telemetry.a 00:23:16.352 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:23:16.609 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:23:16.609 [20/268] Linking target lib/librte_log.so.24.1 00:23:16.609 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:23:16.609 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:23:16.868 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:23:16.868 [24/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:23:16.868 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:23:16.868 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:23:16.868 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:23:16.868 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:23:16.868 [29/268] Linking target lib/librte_kvargs.so.24.1 00:23:16.868 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:23:17.161 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:23:17.161 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:23:17.161 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:23:17.161 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:23:17.161 [35/268] Linking target lib/librte_telemetry.so.24.1 00:23:17.420 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:23:17.420 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:23:17.420 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:23:17.420 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:23:17.420 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:23:17.420 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:23:17.679 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:23:17.679 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:23:17.679 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:23:17.679 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:23:17.679 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:23:18.247 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:23:18.247 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:23:18.247 [49/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:23:18.247 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:23:18.247 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:23:18.247 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:23:18.506 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:23:18.506 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:23:18.506 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:23:18.506 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:23:18.506 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:23:18.765 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:23:18.765 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:23:18.765 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:23:19.024 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:23:19.024 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:23:19.024 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:23:19.024 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:23:19.283 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:23:19.283 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:23:19.283 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:23:19.283 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:23:19.542 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:23:19.801 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:23:19.801 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:23:19.801 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:23:19.801 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:23:19.801 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:23:19.801 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:23:19.801 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:23:19.801 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:23:20.060 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:23:20.060 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:23:20.060 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:23:20.060 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:23:20.319 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:23:20.319 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:23:20.319 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:23:20.578 [85/268] Linking static target lib/librte_eal.a 00:23:20.578 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:23:20.578 [87/268] Linking static target lib/librte_ring.a 00:23:20.578 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:23:20.579 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:23:20.837 [90/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:23:20.837 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:23:20.837 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:23:20.837 [93/268] Linking static target lib/librte_mempool.a 00:23:20.837 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:23:21.095 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:23:21.095 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:23:21.095 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:23:21.095 [98/268] Linking static target lib/librte_rcu.a 00:23:21.095 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:23:21.095 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:23:21.355 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:23:21.614 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:23:21.614 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:23:21.614 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:23:21.614 [105/268] Linking static target lib/librte_meter.a 00:23:21.614 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:23:21.614 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:23:21.873 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:23:21.873 [109/268] Linking static target lib/librte_net.a 00:23:21.873 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:23:21.873 [111/268] Linking static target lib/librte_mbuf.a 00:23:21.873 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:23:21.873 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:23:22.133 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:23:22.133 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:23:22.133 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:23:22.392 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:23:22.392 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:23:22.652 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:23:22.652 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:23:22.911 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:23:22.911 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:23:22.911 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:23:23.481 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:23:23.481 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:23:23.481 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:23:23.481 [127/268] Linking static target lib/librte_pci.a 00:23:23.481 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:23:23.481 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:23:23.481 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:23:23.481 [131/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:23:23.741 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:23:23.741 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:23:23.741 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:23:23.741 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:23:23.741 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:23:23.741 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:23:23.741 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:23:23.741 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:23:23.741 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:23:23.741 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:23.741 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:23:24.001 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:23:24.001 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:23:24.001 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:23:24.001 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:23:24.001 [147/268] Linking static target lib/librte_cmdline.a 00:23:24.260 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:23:24.260 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:23:24.520 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:23:24.520 [151/268] Linking static target lib/librte_timer.a 00:23:24.520 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:23:24.520 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:23:24.779 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:23:24.779 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:23:24.779 [156/268] Linking static target lib/librte_ethdev.a 00:23:24.779 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:23:25.040 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:23:25.040 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:23:25.040 [160/268] Linking static target lib/librte_compressdev.a 00:23:25.040 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:23:25.040 [162/268] Linking static target lib/librte_hash.a 00:23:25.040 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:23:25.040 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:23:25.300 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:23:25.559 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:23:25.559 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:23:25.559 [168/268] Linking static target lib/librte_dmadev.a 00:23:25.559 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:23:25.819 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:23:25.819 [171/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:23:25.819 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:23:25.819 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:23:26.079 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:26.079 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:23:26.079 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:23:26.079 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:23:26.338 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:23:26.338 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:23:26.338 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:23:26.338 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:23:26.338 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:26.598 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:23:26.598 [184/268] Linking static target lib/librte_cryptodev.a 00:23:26.857 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:23:26.857 [186/268] Linking static target lib/librte_power.a 00:23:26.857 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:23:26.857 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:23:26.857 [189/268] Linking static target lib/librte_reorder.a 00:23:26.857 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:23:27.116 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:23:27.116 [192/268] Linking static target lib/librte_security.a 00:23:27.375 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:23:27.375 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:23:27.633 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:23:27.893 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:23:28.154 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:23:28.154 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:23:28.154 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:23:28.154 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:23:28.154 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:23:28.412 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:23:28.412 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:23:28.671 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:23:28.671 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:23:28.931 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:23:28.931 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:23:28.931 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:23:28.931 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:23:28.931 [210/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:23:29.190 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:23:29.190 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:29.190 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:23:29.190 [214/268] Linking static target drivers/librte_bus_pci.a 00:23:29.190 [215/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:29.190 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:23:29.449 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:29.449 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:23:29.449 [219/268] Linking static target drivers/librte_bus_vdev.a 00:23:29.449 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:23:29.449 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:23:29.708 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:23:29.708 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:29.708 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:23:29.708 [225/268] Linking static target drivers/librte_mempool_ring.a 00:23:29.708 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:29.966 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:23:31.343 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:23:31.909 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:23:32.169 [230/268] Linking target lib/librte_eal.so.24.1 00:23:32.169 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:23:32.427 [232/268] Linking target lib/librte_timer.so.24.1 00:23:32.428 [233/268] Linking target lib/librte_dmadev.so.24.1 00:23:32.428 [234/268] Linking target lib/librte_meter.so.24.1 00:23:32.428 [235/268] Linking target lib/librte_ring.so.24.1 00:23:32.428 [236/268] Linking target lib/librte_pci.so.24.1 00:23:32.428 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:23:32.428 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:23:32.428 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:23:32.428 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:23:32.428 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:23:32.428 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:23:32.688 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:23:32.688 [244/268] Linking target lib/librte_rcu.so.24.1 00:23:32.688 [245/268] Linking target lib/librte_mempool.so.24.1 00:23:32.688 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:23:32.688 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:23:32.688 [248/268] Linking target lib/librte_mbuf.so.24.1 00:23:32.688 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:23:32.948 [250/268] Generating 
symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:23:32.948 [251/268] Linking target lib/librte_net.so.24.1 00:23:32.948 [252/268] Linking target lib/librte_reorder.so.24.1 00:23:32.948 [253/268] Linking target lib/librte_compressdev.so.24.1 00:23:32.948 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:23:33.208 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:23:33.208 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:23:33.208 [257/268] Linking target lib/librte_cmdline.so.24.1 00:23:33.208 [258/268] Linking target lib/librte_security.so.24.1 00:23:33.208 [259/268] Linking target lib/librte_hash.so.24.1 00:23:33.467 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:23:34.036 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:23:34.036 [262/268] Linking target lib/librte_ethdev.so.24.1 00:23:34.036 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:23:34.296 [264/268] Linking target lib/librte_power.so.24.1 00:23:35.235 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:23:35.235 [266/268] Linking static target lib/librte_vhost.a 00:23:37.768 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:23:37.768 [268/268] Linking target lib/librte_vhost.so.24.1 00:23:37.768 INFO: autodetecting backend as ninja 00:23:37.768 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:24:04.352 CC lib/ut_mock/mock.o 00:24:04.352 CC lib/ut/ut.o 00:24:04.352 CC lib/log/log.o 00:24:04.352 CC lib/log/log_flags.o 00:24:04.352 CC lib/log/log_deprecated.o 00:24:04.352 LIB libspdk_ut.a 00:24:04.352 LIB libspdk_ut_mock.a 00:24:04.352 SO libspdk_ut.so.2.0 00:24:04.352 LIB libspdk_log.a 00:24:04.352 SO libspdk_ut_mock.so.6.0 00:24:04.352 SYMLINK libspdk_ut.so 00:24:04.352 SO libspdk_log.so.7.1 00:24:04.352 SYMLINK libspdk_ut_mock.so 00:24:04.352 SYMLINK libspdk_log.so 00:24:04.352 CC lib/dma/dma.o 00:24:04.352 CXX lib/trace_parser/trace.o 00:24:04.352 CC lib/ioat/ioat.o 00:24:04.352 CC lib/util/base64.o 00:24:04.352 CC lib/util/bit_array.o 00:24:04.352 CC lib/util/cpuset.o 00:24:04.352 CC lib/util/crc32.o 00:24:04.352 CC lib/util/crc32c.o 00:24:04.352 CC lib/util/crc16.o 00:24:04.352 CC lib/vfio_user/host/vfio_user_pci.o 00:24:04.352 CC lib/vfio_user/host/vfio_user.o 00:24:04.352 CC lib/util/crc32_ieee.o 00:24:04.352 CC lib/util/crc64.o 00:24:04.352 LIB libspdk_dma.a 00:24:04.352 CC lib/util/dif.o 00:24:04.352 SO libspdk_dma.so.5.0 00:24:04.352 CC lib/util/fd.o 00:24:04.352 CC lib/util/fd_group.o 00:24:04.352 SYMLINK libspdk_dma.so 00:24:04.352 CC lib/util/file.o 00:24:04.352 CC lib/util/hexlify.o 00:24:04.352 CC lib/util/iov.o 00:24:04.352 LIB libspdk_ioat.a 00:24:04.352 SO libspdk_ioat.so.7.0 00:24:04.352 CC lib/util/math.o 00:24:04.352 CC lib/util/net.o 00:24:04.352 LIB libspdk_vfio_user.a 00:24:04.352 SYMLINK libspdk_ioat.so 00:24:04.352 CC lib/util/pipe.o 00:24:04.352 SO libspdk_vfio_user.so.5.0 00:24:04.352 CC lib/util/strerror_tls.o 00:24:04.352 CC lib/util/string.o 00:24:04.352 SYMLINK libspdk_vfio_user.so 00:24:04.352 CC lib/util/uuid.o 00:24:04.352 CC lib/util/xor.o 00:24:04.352 CC lib/util/zipf.o 00:24:04.352 CC lib/util/md5.o 00:24:04.352 LIB libspdk_util.a 00:24:04.352 SO libspdk_util.so.10.1 00:24:04.352 LIB 
libspdk_trace_parser.a 00:24:04.352 SO libspdk_trace_parser.so.6.0 00:24:04.352 SYMLINK libspdk_util.so 00:24:04.352 SYMLINK libspdk_trace_parser.so 00:24:04.352 CC lib/env_dpdk/env.o 00:24:04.352 CC lib/env_dpdk/memory.o 00:24:04.352 CC lib/env_dpdk/init.o 00:24:04.352 CC lib/env_dpdk/threads.o 00:24:04.352 CC lib/env_dpdk/pci.o 00:24:04.352 CC lib/conf/conf.o 00:24:04.352 CC lib/rdma_utils/rdma_utils.o 00:24:04.352 CC lib/idxd/idxd.o 00:24:04.352 CC lib/vmd/vmd.o 00:24:04.352 CC lib/json/json_parse.o 00:24:04.352 CC lib/env_dpdk/pci_ioat.o 00:24:04.352 LIB libspdk_conf.a 00:24:04.352 SO libspdk_conf.so.6.0 00:24:04.352 CC lib/env_dpdk/pci_virtio.o 00:24:04.352 LIB libspdk_rdma_utils.a 00:24:04.352 SO libspdk_rdma_utils.so.1.0 00:24:04.352 CC lib/json/json_util.o 00:24:04.352 SYMLINK libspdk_conf.so 00:24:04.352 CC lib/vmd/led.o 00:24:04.352 SYMLINK libspdk_rdma_utils.so 00:24:04.352 CC lib/env_dpdk/pci_vmd.o 00:24:04.352 CC lib/env_dpdk/pci_idxd.o 00:24:04.352 CC lib/json/json_write.o 00:24:04.352 CC lib/env_dpdk/pci_event.o 00:24:04.352 CC lib/env_dpdk/sigbus_handler.o 00:24:04.352 CC lib/env_dpdk/pci_dpdk.o 00:24:04.352 CC lib/env_dpdk/pci_dpdk_2207.o 00:24:04.352 CC lib/env_dpdk/pci_dpdk_2211.o 00:24:04.352 CC lib/idxd/idxd_user.o 00:24:04.352 CC lib/idxd/idxd_kernel.o 00:24:04.352 LIB libspdk_json.a 00:24:04.352 CC lib/rdma_provider/common.o 00:24:04.352 CC lib/rdma_provider/rdma_provider_verbs.o 00:24:04.352 SO libspdk_json.so.6.0 00:24:04.352 LIB libspdk_vmd.a 00:24:04.352 SO libspdk_vmd.so.6.0 00:24:04.352 SYMLINK libspdk_json.so 00:24:04.352 SYMLINK libspdk_vmd.so 00:24:04.352 LIB libspdk_idxd.a 00:24:04.352 LIB libspdk_rdma_provider.a 00:24:04.352 SO libspdk_idxd.so.12.1 00:24:04.352 SO libspdk_rdma_provider.so.7.0 00:24:04.352 SYMLINK libspdk_idxd.so 00:24:04.352 SYMLINK libspdk_rdma_provider.so 00:24:04.352 CC lib/jsonrpc/jsonrpc_server.o 00:24:04.352 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:24:04.352 CC lib/jsonrpc/jsonrpc_client.o 00:24:04.352 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:24:04.352 LIB libspdk_jsonrpc.a 00:24:04.352 SO libspdk_jsonrpc.so.6.0 00:24:04.352 SYMLINK libspdk_jsonrpc.so 00:24:04.352 LIB libspdk_env_dpdk.a 00:24:04.352 SO libspdk_env_dpdk.so.15.1 00:24:04.352 CC lib/rpc/rpc.o 00:24:04.352 SYMLINK libspdk_env_dpdk.so 00:24:04.352 LIB libspdk_rpc.a 00:24:04.352 SO libspdk_rpc.so.6.0 00:24:04.352 SYMLINK libspdk_rpc.so 00:24:04.612 CC lib/keyring/keyring.o 00:24:04.612 CC lib/keyring/keyring_rpc.o 00:24:04.612 CC lib/notify/notify.o 00:24:04.612 CC lib/notify/notify_rpc.o 00:24:04.612 CC lib/trace/trace_flags.o 00:24:04.612 CC lib/trace/trace.o 00:24:04.612 CC lib/trace/trace_rpc.o 00:24:04.872 LIB libspdk_notify.a 00:24:04.872 SO libspdk_notify.so.6.0 00:24:04.872 LIB libspdk_keyring.a 00:24:04.872 SYMLINK libspdk_notify.so 00:24:04.872 LIB libspdk_trace.a 00:24:04.872 SO libspdk_keyring.so.2.0 00:24:05.190 SO libspdk_trace.so.11.0 00:24:05.190 SYMLINK libspdk_keyring.so 00:24:05.190 SYMLINK libspdk_trace.so 00:24:05.449 CC lib/sock/sock.o 00:24:05.449 CC lib/sock/sock_rpc.o 00:24:05.449 CC lib/thread/thread.o 00:24:05.449 CC lib/thread/iobuf.o 00:24:06.018 LIB libspdk_sock.a 00:24:06.018 SO libspdk_sock.so.10.0 00:24:06.018 SYMLINK libspdk_sock.so 00:24:06.277 CC lib/nvme/nvme_ctrlr_cmd.o 00:24:06.277 CC lib/nvme/nvme_ctrlr.o 00:24:06.277 CC lib/nvme/nvme_pcie.o 00:24:06.277 CC lib/nvme/nvme_ns_cmd.o 00:24:06.277 CC lib/nvme/nvme_fabric.o 00:24:06.277 CC lib/nvme/nvme_ns.o 00:24:06.277 CC lib/nvme/nvme_qpair.o 00:24:06.277 CC lib/nvme/nvme_pcie_common.o 
00:24:06.277 CC lib/nvme/nvme.o 00:24:07.215 CC lib/nvme/nvme_quirks.o 00:24:07.215 CC lib/nvme/nvme_transport.o 00:24:07.215 LIB libspdk_thread.a 00:24:07.215 CC lib/nvme/nvme_discovery.o 00:24:07.215 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:24:07.215 SO libspdk_thread.so.11.0 00:24:07.474 SYMLINK libspdk_thread.so 00:24:07.474 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:24:07.474 CC lib/nvme/nvme_tcp.o 00:24:07.474 CC lib/nvme/nvme_opal.o 00:24:07.474 CC lib/nvme/nvme_io_msg.o 00:24:07.734 CC lib/nvme/nvme_poll_group.o 00:24:07.734 CC lib/accel/accel.o 00:24:07.993 CC lib/blob/blobstore.o 00:24:07.993 CC lib/init/json_config.o 00:24:07.993 CC lib/blob/request.o 00:24:07.993 CC lib/virtio/virtio.o 00:24:08.253 CC lib/blob/zeroes.o 00:24:08.253 CC lib/fsdev/fsdev.o 00:24:08.253 CC lib/init/subsystem.o 00:24:08.253 CC lib/init/subsystem_rpc.o 00:24:08.542 CC lib/nvme/nvme_zns.o 00:24:08.542 CC lib/accel/accel_rpc.o 00:24:08.542 CC lib/virtio/virtio_vhost_user.o 00:24:08.542 CC lib/accel/accel_sw.o 00:24:08.542 CC lib/init/rpc.o 00:24:08.801 CC lib/fsdev/fsdev_io.o 00:24:08.801 LIB libspdk_init.a 00:24:08.801 SO libspdk_init.so.6.0 00:24:08.801 SYMLINK libspdk_init.so 00:24:08.801 CC lib/fsdev/fsdev_rpc.o 00:24:09.060 CC lib/virtio/virtio_vfio_user.o 00:24:09.060 CC lib/nvme/nvme_stubs.o 00:24:09.060 CC lib/nvme/nvme_auth.o 00:24:09.060 CC lib/nvme/nvme_cuse.o 00:24:09.060 CC lib/nvme/nvme_rdma.o 00:24:09.060 CC lib/event/app.o 00:24:09.060 LIB libspdk_fsdev.a 00:24:09.060 SO libspdk_fsdev.so.2.0 00:24:09.320 CC lib/virtio/virtio_pci.o 00:24:09.320 SYMLINK libspdk_fsdev.so 00:24:09.320 CC lib/blob/blob_bs_dev.o 00:24:09.320 LIB libspdk_accel.a 00:24:09.320 SO libspdk_accel.so.16.0 00:24:09.320 SYMLINK libspdk_accel.so 00:24:09.321 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:24:09.580 CC lib/event/reactor.o 00:24:09.580 LIB libspdk_virtio.a 00:24:09.580 CC lib/bdev/bdev.o 00:24:09.580 SO libspdk_virtio.so.7.0 00:24:09.580 CC lib/event/log_rpc.o 00:24:09.580 CC lib/event/app_rpc.o 00:24:09.580 SYMLINK libspdk_virtio.so 00:24:09.580 CC lib/bdev/bdev_rpc.o 00:24:09.840 CC lib/event/scheduler_static.o 00:24:09.840 CC lib/bdev/bdev_zone.o 00:24:09.840 CC lib/bdev/part.o 00:24:10.100 LIB libspdk_event.a 00:24:10.100 CC lib/bdev/scsi_nvme.o 00:24:10.100 SO libspdk_event.so.14.0 00:24:10.100 SYMLINK libspdk_event.so 00:24:10.100 LIB libspdk_fuse_dispatcher.a 00:24:10.358 SO libspdk_fuse_dispatcher.so.1.0 00:24:10.358 SYMLINK libspdk_fuse_dispatcher.so 00:24:10.617 LIB libspdk_nvme.a 00:24:10.877 SO libspdk_nvme.so.15.0 00:24:11.136 SYMLINK libspdk_nvme.so 00:24:12.075 LIB libspdk_blob.a 00:24:12.075 SO libspdk_blob.so.11.0 00:24:12.334 SYMLINK libspdk_blob.so 00:24:12.594 CC lib/blobfs/blobfs.o 00:24:12.594 CC lib/blobfs/tree.o 00:24:12.594 CC lib/lvol/lvol.o 00:24:12.594 LIB libspdk_bdev.a 00:24:12.854 SO libspdk_bdev.so.17.0 00:24:12.854 SYMLINK libspdk_bdev.so 00:24:13.114 CC lib/ublk/ublk.o 00:24:13.114 CC lib/ublk/ublk_rpc.o 00:24:13.114 CC lib/scsi/dev.o 00:24:13.114 CC lib/scsi/lun.o 00:24:13.114 CC lib/scsi/port.o 00:24:13.114 CC lib/nvmf/ctrlr.o 00:24:13.114 CC lib/nbd/nbd.o 00:24:13.114 CC lib/ftl/ftl_core.o 00:24:13.373 CC lib/scsi/scsi.o 00:24:13.373 CC lib/scsi/scsi_bdev.o 00:24:13.373 CC lib/nbd/nbd_rpc.o 00:24:13.633 CC lib/nvmf/ctrlr_discovery.o 00:24:13.633 CC lib/scsi/scsi_pr.o 00:24:13.633 LIB libspdk_blobfs.a 00:24:13.633 CC lib/ftl/ftl_init.o 00:24:13.633 SO libspdk_blobfs.so.10.0 00:24:13.633 LIB libspdk_nbd.a 00:24:13.633 CC lib/ftl/ftl_layout.o 00:24:13.633 SO libspdk_nbd.so.7.0 
00:24:13.633 SYMLINK libspdk_blobfs.so 00:24:13.633 CC lib/ftl/ftl_debug.o 00:24:13.633 LIB libspdk_lvol.a 00:24:13.893 SYMLINK libspdk_nbd.so 00:24:13.893 CC lib/ftl/ftl_io.o 00:24:13.893 SO libspdk_lvol.so.10.0 00:24:13.893 CC lib/ftl/ftl_sb.o 00:24:13.893 SYMLINK libspdk_lvol.so 00:24:13.893 CC lib/nvmf/ctrlr_bdev.o 00:24:13.893 CC lib/nvmf/subsystem.o 00:24:13.893 LIB libspdk_ublk.a 00:24:13.893 SO libspdk_ublk.so.3.0 00:24:13.893 CC lib/scsi/scsi_rpc.o 00:24:13.893 CC lib/ftl/ftl_l2p.o 00:24:14.154 SYMLINK libspdk_ublk.so 00:24:14.154 CC lib/ftl/ftl_l2p_flat.o 00:24:14.154 CC lib/ftl/ftl_nv_cache.o 00:24:14.154 CC lib/ftl/ftl_band.o 00:24:14.154 CC lib/ftl/ftl_band_ops.o 00:24:14.154 CC lib/nvmf/nvmf.o 00:24:14.154 CC lib/scsi/task.o 00:24:14.154 CC lib/ftl/ftl_writer.o 00:24:14.413 CC lib/ftl/ftl_rq.o 00:24:14.413 LIB libspdk_scsi.a 00:24:14.413 SO libspdk_scsi.so.9.0 00:24:14.413 CC lib/ftl/ftl_reloc.o 00:24:14.413 CC lib/ftl/ftl_l2p_cache.o 00:24:14.413 CC lib/ftl/ftl_p2l.o 00:24:14.673 CC lib/ftl/ftl_p2l_log.o 00:24:14.673 SYMLINK libspdk_scsi.so 00:24:14.673 CC lib/nvmf/nvmf_rpc.o 00:24:14.933 CC lib/nvmf/transport.o 00:24:14.933 CC lib/ftl/mngt/ftl_mngt.o 00:24:15.192 CC lib/iscsi/conn.o 00:24:15.192 CC lib/iscsi/init_grp.o 00:24:15.192 CC lib/vhost/vhost.o 00:24:15.192 CC lib/vhost/vhost_rpc.o 00:24:15.192 CC lib/vhost/vhost_scsi.o 00:24:15.192 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:24:15.463 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:24:15.463 CC lib/ftl/mngt/ftl_mngt_startup.o 00:24:15.463 CC lib/nvmf/tcp.o 00:24:15.722 CC lib/nvmf/stubs.o 00:24:15.722 CC lib/nvmf/mdns_server.o 00:24:15.722 CC lib/nvmf/rdma.o 00:24:15.722 CC lib/ftl/mngt/ftl_mngt_md.o 00:24:15.722 CC lib/ftl/mngt/ftl_mngt_misc.o 00:24:15.981 CC lib/iscsi/iscsi.o 00:24:15.981 CC lib/iscsi/param.o 00:24:15.981 CC lib/nvmf/auth.o 00:24:16.239 CC lib/iscsi/portal_grp.o 00:24:16.239 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:24:16.239 CC lib/iscsi/tgt_node.o 00:24:16.239 CC lib/vhost/vhost_blk.o 00:24:16.239 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:24:16.239 CC lib/ftl/mngt/ftl_mngt_band.o 00:24:16.498 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:24:16.498 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:24:16.498 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:24:16.498 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:24:16.498 CC lib/iscsi/iscsi_subsystem.o 00:24:16.757 CC lib/ftl/utils/ftl_conf.o 00:24:16.757 CC lib/iscsi/iscsi_rpc.o 00:24:16.757 CC lib/iscsi/task.o 00:24:17.015 CC lib/ftl/utils/ftl_md.o 00:24:17.015 CC lib/ftl/utils/ftl_mempool.o 00:24:17.015 CC lib/ftl/utils/ftl_bitmap.o 00:24:17.015 CC lib/vhost/rte_vhost_user.o 00:24:17.015 CC lib/ftl/utils/ftl_property.o 00:24:17.015 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:24:17.274 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:24:17.274 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:24:17.274 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:24:17.274 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:24:17.274 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:24:17.533 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:24:17.533 CC lib/ftl/upgrade/ftl_sb_v3.o 00:24:17.533 CC lib/ftl/upgrade/ftl_sb_v5.o 00:24:17.533 CC lib/ftl/nvc/ftl_nvc_dev.o 00:24:17.533 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:24:17.533 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:24:17.533 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:24:17.533 LIB libspdk_iscsi.a 00:24:17.791 CC lib/ftl/base/ftl_base_dev.o 00:24:17.791 CC lib/ftl/base/ftl_base_bdev.o 00:24:17.791 CC lib/ftl/ftl_trace.o 00:24:17.791 SO libspdk_iscsi.so.8.0 00:24:18.051 SYMLINK libspdk_iscsi.so 00:24:18.051 LIB libspdk_ftl.a 
00:24:18.051 LIB libspdk_vhost.a 00:24:18.309 SO libspdk_ftl.so.9.0 00:24:18.309 SO libspdk_vhost.so.8.0 00:24:18.309 SYMLINK libspdk_vhost.so 00:24:18.568 LIB libspdk_nvmf.a 00:24:18.568 SYMLINK libspdk_ftl.so 00:24:18.568 SO libspdk_nvmf.so.20.0 00:24:18.827 SYMLINK libspdk_nvmf.so 00:24:19.399 CC module/env_dpdk/env_dpdk_rpc.o 00:24:19.399 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:24:19.399 CC module/fsdev/aio/fsdev_aio.o 00:24:19.399 CC module/sock/posix/posix.o 00:24:19.399 CC module/scheduler/dynamic/scheduler_dynamic.o 00:24:19.399 CC module/keyring/file/keyring.o 00:24:19.399 CC module/accel/error/accel_error.o 00:24:19.399 CC module/scheduler/gscheduler/gscheduler.o 00:24:19.399 CC module/blob/bdev/blob_bdev.o 00:24:19.399 CC module/accel/ioat/accel_ioat.o 00:24:19.399 LIB libspdk_env_dpdk_rpc.a 00:24:19.399 SO libspdk_env_dpdk_rpc.so.6.0 00:24:19.399 SYMLINK libspdk_env_dpdk_rpc.so 00:24:19.399 CC module/accel/ioat/accel_ioat_rpc.o 00:24:19.399 CC module/keyring/file/keyring_rpc.o 00:24:19.399 LIB libspdk_scheduler_gscheduler.a 00:24:19.399 LIB libspdk_scheduler_dpdk_governor.a 00:24:19.657 SO libspdk_scheduler_gscheduler.so.4.0 00:24:19.657 SO libspdk_scheduler_dpdk_governor.so.4.0 00:24:19.657 CC module/accel/error/accel_error_rpc.o 00:24:19.657 LIB libspdk_scheduler_dynamic.a 00:24:19.657 SYMLINK libspdk_scheduler_dpdk_governor.so 00:24:19.657 SYMLINK libspdk_scheduler_gscheduler.so 00:24:19.657 CC module/fsdev/aio/fsdev_aio_rpc.o 00:24:19.657 CC module/fsdev/aio/linux_aio_mgr.o 00:24:19.657 SO libspdk_scheduler_dynamic.so.4.0 00:24:19.657 LIB libspdk_accel_ioat.a 00:24:19.657 LIB libspdk_keyring_file.a 00:24:19.657 SO libspdk_accel_ioat.so.6.0 00:24:19.657 LIB libspdk_blob_bdev.a 00:24:19.657 SO libspdk_keyring_file.so.2.0 00:24:19.657 SYMLINK libspdk_scheduler_dynamic.so 00:24:19.657 SO libspdk_blob_bdev.so.11.0 00:24:19.657 SYMLINK libspdk_accel_ioat.so 00:24:19.657 LIB libspdk_accel_error.a 00:24:19.657 SYMLINK libspdk_keyring_file.so 00:24:19.657 SO libspdk_accel_error.so.2.0 00:24:19.918 SYMLINK libspdk_blob_bdev.so 00:24:19.918 CC module/keyring/linux/keyring.o 00:24:19.918 CC module/keyring/linux/keyring_rpc.o 00:24:19.918 SYMLINK libspdk_accel_error.so 00:24:19.918 CC module/accel/dsa/accel_dsa.o 00:24:19.918 CC module/accel/dsa/accel_dsa_rpc.o 00:24:19.918 CC module/accel/iaa/accel_iaa.o 00:24:19.918 LIB libspdk_keyring_linux.a 00:24:19.918 SO libspdk_keyring_linux.so.1.0 00:24:20.176 CC module/accel/iaa/accel_iaa_rpc.o 00:24:20.176 CC module/blobfs/bdev/blobfs_bdev.o 00:24:20.176 SYMLINK libspdk_keyring_linux.so 00:24:20.176 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:24:20.176 CC module/bdev/error/vbdev_error.o 00:24:20.176 CC module/bdev/gpt/gpt.o 00:24:20.176 CC module/bdev/delay/vbdev_delay.o 00:24:20.176 CC module/bdev/error/vbdev_error_rpc.o 00:24:20.176 LIB libspdk_fsdev_aio.a 00:24:20.176 LIB libspdk_accel_iaa.a 00:24:20.176 SO libspdk_fsdev_aio.so.1.0 00:24:20.176 LIB libspdk_accel_dsa.a 00:24:20.176 SO libspdk_accel_iaa.so.3.0 00:24:20.176 LIB libspdk_blobfs_bdev.a 00:24:20.176 LIB libspdk_sock_posix.a 00:24:20.176 SO libspdk_accel_dsa.so.5.0 00:24:20.176 SO libspdk_blobfs_bdev.so.6.0 00:24:20.176 SO libspdk_sock_posix.so.6.0 00:24:20.176 SYMLINK libspdk_accel_iaa.so 00:24:20.435 SYMLINK libspdk_fsdev_aio.so 00:24:20.435 CC module/bdev/gpt/vbdev_gpt.o 00:24:20.435 CC module/bdev/delay/vbdev_delay_rpc.o 00:24:20.435 SYMLINK libspdk_accel_dsa.so 00:24:20.435 SYMLINK libspdk_blobfs_bdev.so 00:24:20.435 SYMLINK libspdk_sock_posix.so 00:24:20.435 LIB 
libspdk_bdev_error.a 00:24:20.435 CC module/bdev/lvol/vbdev_lvol.o 00:24:20.435 SO libspdk_bdev_error.so.6.0 00:24:20.435 CC module/bdev/malloc/bdev_malloc.o 00:24:20.435 SYMLINK libspdk_bdev_error.so 00:24:20.435 CC module/bdev/null/bdev_null.o 00:24:20.435 CC module/bdev/nvme/bdev_nvme.o 00:24:20.435 CC module/bdev/nvme/bdev_nvme_rpc.o 00:24:20.435 CC module/bdev/passthru/vbdev_passthru.o 00:24:20.435 CC module/bdev/nvme/nvme_rpc.o 00:24:20.435 CC module/bdev/raid/bdev_raid.o 00:24:20.435 LIB libspdk_bdev_delay.a 00:24:20.694 SO libspdk_bdev_delay.so.6.0 00:24:20.694 LIB libspdk_bdev_gpt.a 00:24:20.694 SYMLINK libspdk_bdev_delay.so 00:24:20.694 SO libspdk_bdev_gpt.so.6.0 00:24:20.694 CC module/bdev/malloc/bdev_malloc_rpc.o 00:24:20.694 SYMLINK libspdk_bdev_gpt.so 00:24:20.694 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:24:20.694 CC module/bdev/nvme/bdev_mdns_client.o 00:24:20.694 CC module/bdev/null/bdev_null_rpc.o 00:24:20.952 CC module/bdev/nvme/vbdev_opal.o 00:24:20.952 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:24:20.952 LIB libspdk_bdev_malloc.a 00:24:20.952 CC module/bdev/nvme/vbdev_opal_rpc.o 00:24:20.952 SO libspdk_bdev_malloc.so.6.0 00:24:20.952 LIB libspdk_bdev_null.a 00:24:20.952 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:24:20.952 SO libspdk_bdev_null.so.6.0 00:24:20.952 SYMLINK libspdk_bdev_malloc.so 00:24:20.952 LIB libspdk_bdev_passthru.a 00:24:20.952 SO libspdk_bdev_passthru.so.6.0 00:24:21.211 SYMLINK libspdk_bdev_null.so 00:24:21.211 CC module/bdev/raid/bdev_raid_rpc.o 00:24:21.211 SYMLINK libspdk_bdev_passthru.so 00:24:21.211 LIB libspdk_bdev_lvol.a 00:24:21.211 CC module/bdev/raid/bdev_raid_sb.o 00:24:21.211 CC module/bdev/raid/raid0.o 00:24:21.211 SO libspdk_bdev_lvol.so.6.0 00:24:21.211 CC module/bdev/split/vbdev_split.o 00:24:21.211 CC module/bdev/zone_block/vbdev_zone_block.o 00:24:21.211 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:24:21.211 SYMLINK libspdk_bdev_lvol.so 00:24:21.211 CC module/bdev/xnvme/bdev_xnvme.o 00:24:21.469 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:24:21.469 CC module/bdev/aio/bdev_aio.o 00:24:21.469 CC module/bdev/aio/bdev_aio_rpc.o 00:24:21.469 CC module/bdev/split/vbdev_split_rpc.o 00:24:21.469 CC module/bdev/raid/raid1.o 00:24:21.469 CC module/bdev/raid/concat.o 00:24:21.727 LIB libspdk_bdev_xnvme.a 00:24:21.727 LIB libspdk_bdev_zone_block.a 00:24:21.727 SO libspdk_bdev_xnvme.so.3.0 00:24:21.727 LIB libspdk_bdev_split.a 00:24:21.727 SO libspdk_bdev_zone_block.so.6.0 00:24:21.727 CC module/bdev/ftl/bdev_ftl.o 00:24:21.727 SO libspdk_bdev_split.so.6.0 00:24:21.727 SYMLINK libspdk_bdev_xnvme.so 00:24:21.727 SYMLINK libspdk_bdev_zone_block.so 00:24:21.727 CC module/bdev/ftl/bdev_ftl_rpc.o 00:24:21.727 SYMLINK libspdk_bdev_split.so 00:24:21.727 CC module/bdev/iscsi/bdev_iscsi.o 00:24:21.727 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:24:21.727 LIB libspdk_bdev_aio.a 00:24:21.986 SO libspdk_bdev_aio.so.6.0 00:24:21.986 LIB libspdk_bdev_raid.a 00:24:21.986 SYMLINK libspdk_bdev_aio.so 00:24:21.986 SO libspdk_bdev_raid.so.6.0 00:24:21.986 CC module/bdev/virtio/bdev_virtio_scsi.o 00:24:21.986 CC module/bdev/virtio/bdev_virtio_blk.o 00:24:21.986 CC module/bdev/virtio/bdev_virtio_rpc.o 00:24:21.986 LIB libspdk_bdev_ftl.a 00:24:21.986 SYMLINK libspdk_bdev_raid.so 00:24:22.245 SO libspdk_bdev_ftl.so.6.0 00:24:22.245 SYMLINK libspdk_bdev_ftl.so 00:24:22.245 LIB libspdk_bdev_iscsi.a 00:24:22.245 SO libspdk_bdev_iscsi.so.6.0 00:24:22.245 SYMLINK libspdk_bdev_iscsi.so 00:24:22.505 LIB libspdk_bdev_virtio.a 00:24:22.764 SO libspdk_bdev_virtio.so.6.0 
00:24:22.764 SYMLINK libspdk_bdev_virtio.so 00:24:23.703 LIB libspdk_bdev_nvme.a 00:24:23.703 SO libspdk_bdev_nvme.so.7.1 00:24:23.703 SYMLINK libspdk_bdev_nvme.so 00:24:24.272 CC module/event/subsystems/iobuf/iobuf.o 00:24:24.272 CC module/event/subsystems/scheduler/scheduler.o 00:24:24.272 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:24:24.272 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:24:24.534 CC module/event/subsystems/keyring/keyring.o 00:24:24.534 CC module/event/subsystems/vmd/vmd.o 00:24:24.534 CC module/event/subsystems/vmd/vmd_rpc.o 00:24:24.534 CC module/event/subsystems/sock/sock.o 00:24:24.534 CC module/event/subsystems/fsdev/fsdev.o 00:24:24.534 LIB libspdk_event_vmd.a 00:24:24.534 LIB libspdk_event_vhost_blk.a 00:24:24.534 LIB libspdk_event_scheduler.a 00:24:24.534 LIB libspdk_event_keyring.a 00:24:24.534 LIB libspdk_event_sock.a 00:24:24.534 LIB libspdk_event_iobuf.a 00:24:24.534 SO libspdk_event_vmd.so.6.0 00:24:24.535 SO libspdk_event_vhost_blk.so.3.0 00:24:24.535 SO libspdk_event_scheduler.so.4.0 00:24:24.535 LIB libspdk_event_fsdev.a 00:24:24.535 SO libspdk_event_keyring.so.1.0 00:24:24.535 SO libspdk_event_sock.so.5.0 00:24:24.535 SO libspdk_event_iobuf.so.3.0 00:24:24.535 SO libspdk_event_fsdev.so.1.0 00:24:24.535 SYMLINK libspdk_event_vhost_blk.so 00:24:24.535 SYMLINK libspdk_event_vmd.so 00:24:24.535 SYMLINK libspdk_event_scheduler.so 00:24:24.535 SYMLINK libspdk_event_keyring.so 00:24:24.535 SYMLINK libspdk_event_sock.so 00:24:24.535 SYMLINK libspdk_event_fsdev.so 00:24:24.535 SYMLINK libspdk_event_iobuf.so 00:24:25.103 CC module/event/subsystems/accel/accel.o 00:24:25.103 LIB libspdk_event_accel.a 00:24:25.362 SO libspdk_event_accel.so.6.0 00:24:25.362 SYMLINK libspdk_event_accel.so 00:24:25.622 CC module/event/subsystems/bdev/bdev.o 00:24:25.948 LIB libspdk_event_bdev.a 00:24:25.948 SO libspdk_event_bdev.so.6.0 00:24:25.948 SYMLINK libspdk_event_bdev.so 00:24:26.207 CC module/event/subsystems/nbd/nbd.o 00:24:26.207 CC module/event/subsystems/scsi/scsi.o 00:24:26.466 CC module/event/subsystems/ublk/ublk.o 00:24:26.466 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:24:26.466 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:24:26.466 LIB libspdk_event_nbd.a 00:24:26.466 LIB libspdk_event_scsi.a 00:24:26.466 LIB libspdk_event_ublk.a 00:24:26.466 SO libspdk_event_nbd.so.6.0 00:24:26.466 SO libspdk_event_scsi.so.6.0 00:24:26.466 SO libspdk_event_ublk.so.3.0 00:24:26.466 SYMLINK libspdk_event_nbd.so 00:24:26.466 SYMLINK libspdk_event_scsi.so 00:24:26.724 SYMLINK libspdk_event_ublk.so 00:24:26.724 LIB libspdk_event_nvmf.a 00:24:26.724 SO libspdk_event_nvmf.so.6.0 00:24:26.724 SYMLINK libspdk_event_nvmf.so 00:24:26.990 CC module/event/subsystems/iscsi/iscsi.o 00:24:26.990 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:24:26.990 LIB libspdk_event_vhost_scsi.a 00:24:26.990 LIB libspdk_event_iscsi.a 00:24:26.990 SO libspdk_event_vhost_scsi.so.3.0 00:24:27.247 SO libspdk_event_iscsi.so.6.0 00:24:27.247 SYMLINK libspdk_event_vhost_scsi.so 00:24:27.247 SYMLINK libspdk_event_iscsi.so 00:24:27.504 SO libspdk.so.6.0 00:24:27.504 SYMLINK libspdk.so 00:24:27.763 CXX app/trace/trace.o 00:24:27.763 CC app/trace_record/trace_record.o 00:24:27.763 CC app/spdk_lspci/spdk_lspci.o 00:24:27.763 CC app/spdk_nvme_identify/identify.o 00:24:27.763 CC app/spdk_nvme_perf/perf.o 00:24:27.763 CC app/iscsi_tgt/iscsi_tgt.o 00:24:27.763 CC app/nvmf_tgt/nvmf_main.o 00:24:27.763 CC app/spdk_tgt/spdk_tgt.o 00:24:27.763 CC test/thread/poller_perf/poller_perf.o 00:24:27.763 CC 
examples/util/zipf/zipf.o 00:24:27.763 LINK spdk_lspci 00:24:28.021 LINK iscsi_tgt 00:24:28.021 LINK nvmf_tgt 00:24:28.021 LINK poller_perf 00:24:28.021 LINK zipf 00:24:28.021 LINK spdk_trace_record 00:24:28.021 LINK spdk_tgt 00:24:28.281 LINK spdk_trace 00:24:28.281 CC app/spdk_nvme_discover/discovery_aer.o 00:24:28.281 CC app/spdk_top/spdk_top.o 00:24:28.281 CC app/spdk_dd/spdk_dd.o 00:24:28.281 CC examples/ioat/perf/perf.o 00:24:28.281 CC test/dma/test_dma/test_dma.o 00:24:28.281 LINK spdk_nvme_discover 00:24:28.542 CC test/app/bdev_svc/bdev_svc.o 00:24:28.542 CC app/fio/nvme/fio_plugin.o 00:24:28.542 CC examples/ioat/verify/verify.o 00:24:28.802 LINK bdev_svc 00:24:28.802 LINK ioat_perf 00:24:28.802 TEST_HEADER include/spdk/accel.h 00:24:28.802 TEST_HEADER include/spdk/accel_module.h 00:24:28.802 TEST_HEADER include/spdk/assert.h 00:24:28.802 TEST_HEADER include/spdk/barrier.h 00:24:28.802 TEST_HEADER include/spdk/base64.h 00:24:28.802 TEST_HEADER include/spdk/bdev.h 00:24:28.802 TEST_HEADER include/spdk/bdev_module.h 00:24:28.802 TEST_HEADER include/spdk/bdev_zone.h 00:24:28.802 TEST_HEADER include/spdk/bit_array.h 00:24:28.802 TEST_HEADER include/spdk/bit_pool.h 00:24:28.802 TEST_HEADER include/spdk/blob_bdev.h 00:24:28.802 TEST_HEADER include/spdk/blobfs_bdev.h 00:24:28.802 TEST_HEADER include/spdk/blobfs.h 00:24:28.802 TEST_HEADER include/spdk/blob.h 00:24:28.802 TEST_HEADER include/spdk/conf.h 00:24:28.802 TEST_HEADER include/spdk/config.h 00:24:28.803 TEST_HEADER include/spdk/cpuset.h 00:24:28.803 TEST_HEADER include/spdk/crc16.h 00:24:28.803 TEST_HEADER include/spdk/crc32.h 00:24:28.803 TEST_HEADER include/spdk/crc64.h 00:24:28.803 LINK spdk_dd 00:24:28.803 TEST_HEADER include/spdk/dif.h 00:24:28.803 TEST_HEADER include/spdk/dma.h 00:24:28.803 TEST_HEADER include/spdk/endian.h 00:24:28.803 TEST_HEADER include/spdk/env_dpdk.h 00:24:28.803 TEST_HEADER include/spdk/env.h 00:24:28.803 TEST_HEADER include/spdk/event.h 00:24:28.803 TEST_HEADER include/spdk/fd_group.h 00:24:28.803 TEST_HEADER include/spdk/fd.h 00:24:28.803 TEST_HEADER include/spdk/file.h 00:24:28.803 TEST_HEADER include/spdk/fsdev.h 00:24:28.803 TEST_HEADER include/spdk/fsdev_module.h 00:24:28.803 TEST_HEADER include/spdk/ftl.h 00:24:28.803 TEST_HEADER include/spdk/fuse_dispatcher.h 00:24:28.803 TEST_HEADER include/spdk/gpt_spec.h 00:24:28.803 TEST_HEADER include/spdk/hexlify.h 00:24:28.803 LINK verify 00:24:28.803 TEST_HEADER include/spdk/histogram_data.h 00:24:28.803 TEST_HEADER include/spdk/idxd.h 00:24:28.803 TEST_HEADER include/spdk/idxd_spec.h 00:24:28.803 TEST_HEADER include/spdk/init.h 00:24:28.803 TEST_HEADER include/spdk/ioat.h 00:24:28.803 TEST_HEADER include/spdk/ioat_spec.h 00:24:28.803 LINK spdk_nvme_perf 00:24:28.803 TEST_HEADER include/spdk/iscsi_spec.h 00:24:28.803 TEST_HEADER include/spdk/json.h 00:24:28.803 TEST_HEADER include/spdk/jsonrpc.h 00:24:28.803 TEST_HEADER include/spdk/keyring.h 00:24:28.803 TEST_HEADER include/spdk/keyring_module.h 00:24:28.803 TEST_HEADER include/spdk/likely.h 00:24:28.803 TEST_HEADER include/spdk/log.h 00:24:28.803 TEST_HEADER include/spdk/lvol.h 00:24:28.803 TEST_HEADER include/spdk/md5.h 00:24:28.803 TEST_HEADER include/spdk/memory.h 00:24:28.803 TEST_HEADER include/spdk/mmio.h 00:24:28.803 LINK spdk_nvme_identify 00:24:28.803 TEST_HEADER include/spdk/nbd.h 00:24:28.803 TEST_HEADER include/spdk/net.h 00:24:28.803 TEST_HEADER include/spdk/notify.h 00:24:28.803 TEST_HEADER include/spdk/nvme.h 00:24:28.803 TEST_HEADER include/spdk/nvme_intel.h 00:24:28.803 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:24:28.803 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:24:28.803 TEST_HEADER include/spdk/nvme_spec.h 00:24:28.803 TEST_HEADER include/spdk/nvme_zns.h 00:24:28.803 TEST_HEADER include/spdk/nvmf_cmd.h 00:24:28.803 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:24:28.803 TEST_HEADER include/spdk/nvmf.h 00:24:28.803 TEST_HEADER include/spdk/nvmf_spec.h 00:24:28.803 TEST_HEADER include/spdk/nvmf_transport.h 00:24:28.803 TEST_HEADER include/spdk/opal.h 00:24:28.803 TEST_HEADER include/spdk/opal_spec.h 00:24:28.803 TEST_HEADER include/spdk/pci_ids.h 00:24:28.803 TEST_HEADER include/spdk/pipe.h 00:24:28.803 TEST_HEADER include/spdk/queue.h 00:24:28.803 TEST_HEADER include/spdk/reduce.h 00:24:28.803 TEST_HEADER include/spdk/rpc.h 00:24:28.803 TEST_HEADER include/spdk/scheduler.h 00:24:28.803 TEST_HEADER include/spdk/scsi.h 00:24:28.803 TEST_HEADER include/spdk/scsi_spec.h 00:24:28.803 TEST_HEADER include/spdk/sock.h 00:24:28.803 TEST_HEADER include/spdk/stdinc.h 00:24:28.803 TEST_HEADER include/spdk/string.h 00:24:28.803 TEST_HEADER include/spdk/thread.h 00:24:28.803 TEST_HEADER include/spdk/trace.h 00:24:28.803 TEST_HEADER include/spdk/trace_parser.h 00:24:28.803 TEST_HEADER include/spdk/tree.h 00:24:28.803 TEST_HEADER include/spdk/ublk.h 00:24:28.803 TEST_HEADER include/spdk/util.h 00:24:28.803 TEST_HEADER include/spdk/uuid.h 00:24:28.803 TEST_HEADER include/spdk/version.h 00:24:28.803 TEST_HEADER include/spdk/vfio_user_pci.h 00:24:28.803 TEST_HEADER include/spdk/vfio_user_spec.h 00:24:28.803 TEST_HEADER include/spdk/vhost.h 00:24:28.803 TEST_HEADER include/spdk/vmd.h 00:24:28.803 TEST_HEADER include/spdk/xor.h 00:24:28.803 TEST_HEADER include/spdk/zipf.h 00:24:29.068 CXX test/cpp_headers/accel.o 00:24:29.068 LINK test_dma 00:24:29.068 CC app/vhost/vhost.o 00:24:29.068 CXX test/cpp_headers/accel_module.o 00:24:29.068 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:24:29.068 CC examples/interrupt_tgt/interrupt_tgt.o 00:24:29.068 CC examples/vmd/lsvmd/lsvmd.o 00:24:29.068 CC examples/idxd/perf/perf.o 00:24:29.331 LINK vhost 00:24:29.331 LINK lsvmd 00:24:29.331 LINK spdk_nvme 00:24:29.331 CXX test/cpp_headers/assert.o 00:24:29.331 CC examples/thread/thread/thread_ex.o 00:24:29.331 LINK interrupt_tgt 00:24:29.331 CC examples/sock/hello_world/hello_sock.o 00:24:29.591 LINK spdk_top 00:24:29.591 CXX test/cpp_headers/barrier.o 00:24:29.591 CC app/fio/bdev/fio_plugin.o 00:24:29.591 CC examples/vmd/led/led.o 00:24:29.591 LINK idxd_perf 00:24:29.591 LINK thread 00:24:29.591 LINK nvme_fuzz 00:24:29.591 CXX test/cpp_headers/base64.o 00:24:29.591 LINK led 00:24:29.591 CC test/env/mem_callbacks/mem_callbacks.o 00:24:29.851 LINK hello_sock 00:24:29.851 CC test/event/reactor/reactor.o 00:24:29.851 CC test/event/event_perf/event_perf.o 00:24:29.851 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:24:29.851 CXX test/cpp_headers/bdev.o 00:24:29.851 LINK reactor 00:24:29.851 CC test/rpc_client/rpc_client_test.o 00:24:29.851 LINK event_perf 00:24:29.851 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:24:30.111 CC test/nvme/aer/aer.o 00:24:30.111 CXX test/cpp_headers/bdev_module.o 00:24:30.111 LINK spdk_bdev 00:24:30.111 CC examples/accel/perf/accel_perf.o 00:24:30.111 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:24:30.111 LINK rpc_client_test 00:24:30.111 CC test/event/reactor_perf/reactor_perf.o 00:24:30.111 CC test/event/app_repeat/app_repeat.o 00:24:30.370 CXX test/cpp_headers/bdev_zone.o 00:24:30.370 LINK mem_callbacks 00:24:30.370 CC test/app/histogram_perf/histogram_perf.o 00:24:30.371 LINK 
reactor_perf 00:24:30.371 LINK aer 00:24:30.371 LINK app_repeat 00:24:30.371 CC test/app/jsoncat/jsoncat.o 00:24:30.371 CXX test/cpp_headers/bit_array.o 00:24:30.371 LINK histogram_perf 00:24:30.629 CC test/env/vtophys/vtophys.o 00:24:30.629 CXX test/cpp_headers/bit_pool.o 00:24:30.629 LINK jsoncat 00:24:30.629 LINK vhost_fuzz 00:24:30.629 CC test/nvme/reset/reset.o 00:24:30.629 LINK vtophys 00:24:30.629 CXX test/cpp_headers/blob_bdev.o 00:24:30.629 LINK accel_perf 00:24:30.629 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:24:30.629 CC test/nvme/sgl/sgl.o 00:24:30.629 CC test/event/scheduler/scheduler.o 00:24:30.888 CC test/nvme/e2edp/nvme_dp.o 00:24:30.888 CXX test/cpp_headers/blobfs_bdev.o 00:24:30.888 LINK env_dpdk_post_init 00:24:30.888 CXX test/cpp_headers/blobfs.o 00:24:30.888 LINK reset 00:24:30.888 LINK scheduler 00:24:31.146 CC test/accel/dif/dif.o 00:24:31.146 LINK sgl 00:24:31.146 LINK nvme_dp 00:24:31.147 CXX test/cpp_headers/blob.o 00:24:31.147 CC examples/blob/hello_world/hello_blob.o 00:24:31.147 CC examples/blob/cli/blobcli.o 00:24:31.147 CC test/env/memory/memory_ut.o 00:24:31.147 CXX test/cpp_headers/conf.o 00:24:31.406 CC test/env/pci/pci_ut.o 00:24:31.406 LINK hello_blob 00:24:31.406 CC test/nvme/overhead/overhead.o 00:24:31.406 CXX test/cpp_headers/config.o 00:24:31.406 CC examples/nvme/hello_world/hello_world.o 00:24:31.406 CC examples/fsdev/hello_world/hello_fsdev.o 00:24:31.406 CXX test/cpp_headers/cpuset.o 00:24:31.665 CXX test/cpp_headers/crc16.o 00:24:31.665 LINK hello_world 00:24:31.665 LINK dif 00:24:31.665 LINK overhead 00:24:31.665 LINK blobcli 00:24:31.665 LINK pci_ut 00:24:31.925 LINK hello_fsdev 00:24:31.925 CC test/blobfs/mkfs/mkfs.o 00:24:31.925 CXX test/cpp_headers/crc32.o 00:24:31.925 LINK iscsi_fuzz 00:24:31.925 CXX test/cpp_headers/crc64.o 00:24:31.925 CC examples/nvme/reconnect/reconnect.o 00:24:31.925 LINK mkfs 00:24:32.184 CC test/nvme/err_injection/err_injection.o 00:24:32.184 CC test/nvme/startup/startup.o 00:24:32.184 CC test/app/stub/stub.o 00:24:32.184 CC test/nvme/reserve/reserve.o 00:24:32.184 CXX test/cpp_headers/dif.o 00:24:32.184 CC test/nvme/simple_copy/simple_copy.o 00:24:32.184 LINK err_injection 00:24:32.184 LINK startup 00:24:32.498 CC examples/nvme/nvme_manage/nvme_manage.o 00:24:32.498 LINK stub 00:24:32.498 CXX test/cpp_headers/dma.o 00:24:32.498 LINK reserve 00:24:32.498 CC test/nvme/connect_stress/connect_stress.o 00:24:32.498 LINK memory_ut 00:24:32.498 LINK reconnect 00:24:32.498 LINK simple_copy 00:24:32.498 CXX test/cpp_headers/endian.o 00:24:32.498 CXX test/cpp_headers/env_dpdk.o 00:24:32.498 CC test/nvme/boot_partition/boot_partition.o 00:24:32.782 CC examples/nvme/arbitration/arbitration.o 00:24:32.782 LINK connect_stress 00:24:32.782 CC test/nvme/compliance/nvme_compliance.o 00:24:32.782 CC test/nvme/fused_ordering/fused_ordering.o 00:24:32.782 CC test/nvme/doorbell_aers/doorbell_aers.o 00:24:32.783 CXX test/cpp_headers/env.o 00:24:32.783 CC test/nvme/fdp/fdp.o 00:24:32.783 CC test/nvme/cuse/cuse.o 00:24:32.783 LINK boot_partition 00:24:32.783 CXX test/cpp_headers/event.o 00:24:33.041 LINK doorbell_aers 00:24:33.041 LINK fused_ordering 00:24:33.041 CC examples/nvme/hotplug/hotplug.o 00:24:33.041 LINK nvme_manage 00:24:33.041 LINK arbitration 00:24:33.041 CXX test/cpp_headers/fd_group.o 00:24:33.041 CC examples/nvme/cmb_copy/cmb_copy.o 00:24:33.041 LINK nvme_compliance 00:24:33.041 CC examples/nvme/abort/abort.o 00:24:33.041 CXX test/cpp_headers/fd.o 00:24:33.041 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:24:33.299 LINK hotplug 00:24:33.299 LINK fdp 00:24:33.299 CXX test/cpp_headers/file.o 00:24:33.299 LINK cmb_copy 00:24:33.299 CXX test/cpp_headers/fsdev.o 00:24:33.299 CXX test/cpp_headers/fsdev_module.o 00:24:33.299 LINK pmr_persistence 00:24:33.299 CXX test/cpp_headers/ftl.o 00:24:33.299 CXX test/cpp_headers/fuse_dispatcher.o 00:24:33.559 CXX test/cpp_headers/gpt_spec.o 00:24:33.559 CXX test/cpp_headers/hexlify.o 00:24:33.559 CC test/lvol/esnap/esnap.o 00:24:33.559 CXX test/cpp_headers/histogram_data.o 00:24:33.559 CXX test/cpp_headers/idxd.o 00:24:33.559 LINK abort 00:24:33.559 CXX test/cpp_headers/idxd_spec.o 00:24:33.559 CC examples/bdev/hello_world/hello_bdev.o 00:24:33.559 CXX test/cpp_headers/init.o 00:24:33.819 CXX test/cpp_headers/ioat.o 00:24:33.819 CC examples/bdev/bdevperf/bdevperf.o 00:24:33.819 CXX test/cpp_headers/ioat_spec.o 00:24:33.819 CXX test/cpp_headers/iscsi_spec.o 00:24:33.819 CC test/bdev/bdevio/bdevio.o 00:24:33.819 CXX test/cpp_headers/json.o 00:24:33.819 CXX test/cpp_headers/jsonrpc.o 00:24:33.819 LINK hello_bdev 00:24:33.819 CXX test/cpp_headers/keyring.o 00:24:33.819 CXX test/cpp_headers/keyring_module.o 00:24:33.819 CXX test/cpp_headers/likely.o 00:24:33.819 CXX test/cpp_headers/log.o 00:24:34.078 CXX test/cpp_headers/lvol.o 00:24:34.078 CXX test/cpp_headers/md5.o 00:24:34.078 CXX test/cpp_headers/memory.o 00:24:34.078 CXX test/cpp_headers/mmio.o 00:24:34.078 CXX test/cpp_headers/nbd.o 00:24:34.078 CXX test/cpp_headers/net.o 00:24:34.078 CXX test/cpp_headers/notify.o 00:24:34.078 CXX test/cpp_headers/nvme.o 00:24:34.337 CXX test/cpp_headers/nvme_intel.o 00:24:34.337 CXX test/cpp_headers/nvme_ocssd.o 00:24:34.337 LINK bdevio 00:24:34.337 CXX test/cpp_headers/nvme_ocssd_spec.o 00:24:34.337 CXX test/cpp_headers/nvme_spec.o 00:24:34.337 CXX test/cpp_headers/nvme_zns.o 00:24:34.337 LINK cuse 00:24:34.337 CXX test/cpp_headers/nvmf_cmd.o 00:24:34.337 CXX test/cpp_headers/nvmf_fc_spec.o 00:24:34.337 CXX test/cpp_headers/nvmf.o 00:24:34.596 CXX test/cpp_headers/nvmf_spec.o 00:24:34.596 CXX test/cpp_headers/nvmf_transport.o 00:24:34.596 CXX test/cpp_headers/opal.o 00:24:34.596 CXX test/cpp_headers/opal_spec.o 00:24:34.596 CXX test/cpp_headers/pci_ids.o 00:24:34.596 CXX test/cpp_headers/pipe.o 00:24:34.596 CXX test/cpp_headers/queue.o 00:24:34.596 CXX test/cpp_headers/reduce.o 00:24:34.596 LINK bdevperf 00:24:34.596 CXX test/cpp_headers/rpc.o 00:24:34.596 CXX test/cpp_headers/scheduler.o 00:24:34.596 CXX test/cpp_headers/scsi.o 00:24:34.596 CXX test/cpp_headers/scsi_spec.o 00:24:34.596 CXX test/cpp_headers/sock.o 00:24:34.596 CXX test/cpp_headers/stdinc.o 00:24:34.856 CXX test/cpp_headers/string.o 00:24:34.856 CXX test/cpp_headers/thread.o 00:24:34.856 CXX test/cpp_headers/trace.o 00:24:34.856 CXX test/cpp_headers/trace_parser.o 00:24:34.856 CXX test/cpp_headers/tree.o 00:24:34.856 CXX test/cpp_headers/ublk.o 00:24:34.856 CXX test/cpp_headers/util.o 00:24:34.856 CXX test/cpp_headers/uuid.o 00:24:34.856 CXX test/cpp_headers/version.o 00:24:34.856 CXX test/cpp_headers/vfio_user_pci.o 00:24:34.856 CXX test/cpp_headers/vfio_user_spec.o 00:24:34.856 CXX test/cpp_headers/vhost.o 00:24:34.856 CXX test/cpp_headers/vmd.o 00:24:35.115 CXX test/cpp_headers/xor.o 00:24:35.115 CXX test/cpp_headers/zipf.o 00:24:35.115 CC examples/nvmf/nvmf/nvmf.o 00:24:35.376 LINK nvmf 00:24:39.573 LINK esnap 00:24:40.141 00:24:40.141 real 1m38.494s 00:24:40.141 user 8m26.716s 00:24:40.141 sys 1m53.032s 00:24:40.141 
************************************ 00:24:40.141 END TEST make 00:24:40.141 ************************************ 00:24:40.141 05:35:59 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:24:40.141 05:35:59 make -- common/autotest_common.sh@10 -- $ set +x 00:24:40.141 05:35:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:24:40.141 05:35:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:40.141 05:35:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:40.141 05:35:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:40.141 05:35:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:40.141 05:35:59 -- pm/common@44 -- $ pid=5501 00:24:40.141 05:35:59 -- pm/common@50 -- $ kill -TERM 5501 00:24:40.141 05:35:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:40.142 05:35:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:40.142 05:35:59 -- pm/common@44 -- $ pid=5503 00:24:40.142 05:35:59 -- pm/common@50 -- $ kill -TERM 5503 00:24:40.142 05:35:59 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:24:40.142 05:35:59 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:24:40.430 05:36:00 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:40.430 05:36:00 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:40.430 05:36:00 -- common/autotest_common.sh@1691 -- # lcov --version 00:24:40.430 05:36:00 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:40.430 05:36:00 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.430 05:36:00 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.430 05:36:00 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.430 05:36:00 -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.430 05:36:00 -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.430 05:36:00 -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.430 05:36:00 -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.430 05:36:00 -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.430 05:36:00 -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.430 05:36:00 -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.430 05:36:00 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.430 05:36:00 -- scripts/common.sh@344 -- # case "$op" in 00:24:40.430 05:36:00 -- scripts/common.sh@345 -- # : 1 00:24:40.430 05:36:00 -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.430 05:36:00 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.430 05:36:00 -- scripts/common.sh@365 -- # decimal 1 00:24:40.430 05:36:00 -- scripts/common.sh@353 -- # local d=1 00:24:40.430 05:36:00 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.430 05:36:00 -- scripts/common.sh@355 -- # echo 1 00:24:40.430 05:36:00 -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.430 05:36:00 -- scripts/common.sh@366 -- # decimal 2 00:24:40.430 05:36:00 -- scripts/common.sh@353 -- # local d=2 00:24:40.430 05:36:00 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.430 05:36:00 -- scripts/common.sh@355 -- # echo 2 00:24:40.430 05:36:00 -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.430 05:36:00 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.430 05:36:00 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.430 05:36:00 -- scripts/common.sh@368 -- # return 0 00:24:40.430 05:36:00 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.430 05:36:00 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:40.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.430 --rc genhtml_branch_coverage=1 00:24:40.430 --rc genhtml_function_coverage=1 00:24:40.430 --rc genhtml_legend=1 00:24:40.430 --rc geninfo_all_blocks=1 00:24:40.430 --rc geninfo_unexecuted_blocks=1 00:24:40.430 00:24:40.430 ' 00:24:40.430 05:36:00 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:40.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.430 --rc genhtml_branch_coverage=1 00:24:40.430 --rc genhtml_function_coverage=1 00:24:40.431 --rc genhtml_legend=1 00:24:40.431 --rc geninfo_all_blocks=1 00:24:40.431 --rc geninfo_unexecuted_blocks=1 00:24:40.431 00:24:40.431 ' 00:24:40.431 05:36:00 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:40.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.431 --rc genhtml_branch_coverage=1 00:24:40.431 --rc genhtml_function_coverage=1 00:24:40.431 --rc genhtml_legend=1 00:24:40.431 --rc geninfo_all_blocks=1 00:24:40.431 --rc geninfo_unexecuted_blocks=1 00:24:40.431 00:24:40.431 ' 00:24:40.431 05:36:00 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:40.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.431 --rc genhtml_branch_coverage=1 00:24:40.431 --rc genhtml_function_coverage=1 00:24:40.431 --rc genhtml_legend=1 00:24:40.431 --rc geninfo_all_blocks=1 00:24:40.431 --rc geninfo_unexecuted_blocks=1 00:24:40.431 00:24:40.431 ' 00:24:40.431 05:36:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:40.431 05:36:00 -- nvmf/common.sh@7 -- # uname -s 00:24:40.431 05:36:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.431 05:36:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.431 05:36:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.431 05:36:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.431 05:36:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.431 05:36:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.431 05:36:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.431 05:36:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.431 05:36:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.431 05:36:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.431 05:36:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92143e44-19be-4cde-be32-130a9d4b1300 00:24:40.431 
05:36:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=92143e44-19be-4cde-be32-130a9d4b1300 00:24:40.431 05:36:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.431 05:36:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.431 05:36:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:40.431 05:36:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.431 05:36:00 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:40.431 05:36:00 -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.431 05:36:00 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.431 05:36:00 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.431 05:36:00 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.431 05:36:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.431 05:36:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.431 05:36:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.431 05:36:00 -- paths/export.sh@5 -- # export PATH 00:24:40.431 05:36:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.431 05:36:00 -- nvmf/common.sh@51 -- # : 0 00:24:40.431 05:36:00 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.431 05:36:00 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.431 05:36:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.431 05:36:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.431 05:36:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.431 05:36:00 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.431 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.431 05:36:00 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.431 05:36:00 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.431 05:36:00 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.431 05:36:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:24:40.431 05:36:00 -- spdk/autotest.sh@32 -- # uname -s 00:24:40.431 05:36:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:24:40.431 05:36:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:24:40.431 05:36:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:40.431 05:36:00 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:24:40.431 05:36:00 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:24:40.431 05:36:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:24:40.431 05:36:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:24:40.431 05:36:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:24:40.431 05:36:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:24:40.431 05:36:00 -- spdk/autotest.sh@48 -- # udevadm_pid=55101 00:24:40.431 05:36:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:24:40.431 05:36:00 -- pm/common@17 -- # local monitor 00:24:40.431 05:36:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:24:40.431 05:36:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:24:40.431 05:36:00 -- pm/common@25 -- # sleep 1 00:24:40.431 05:36:00 -- pm/common@21 -- # date +%s 00:24:40.431 05:36:00 -- pm/common@21 -- # date +%s 00:24:40.431 05:36:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732080960 00:24:40.431 05:36:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732080960 00:24:40.689 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732080960_collect-vmstat.pm.log 00:24:40.689 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732080960_collect-cpu-load.pm.log 00:24:41.624 05:36:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:24:41.624 05:36:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:24:41.624 05:36:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:41.624 05:36:01 -- common/autotest_common.sh@10 -- # set +x 00:24:41.624 05:36:01 -- spdk/autotest.sh@59 -- # create_test_list 00:24:41.624 05:36:01 -- common/autotest_common.sh@750 -- # xtrace_disable 00:24:41.624 05:36:01 -- common/autotest_common.sh@10 -- # set +x 00:24:41.624 05:36:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:24:41.624 05:36:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:24:41.624 05:36:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:24:41.624 05:36:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:24:41.624 05:36:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:24:41.624 05:36:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:24:41.624 05:36:01 -- common/autotest_common.sh@1455 -- # uname 00:24:41.624 05:36:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:24:41.624 05:36:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:24:41.624 05:36:01 -- common/autotest_common.sh@1475 -- # uname 00:24:41.624 05:36:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:24:41.624 05:36:01 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:24:41.624 05:36:01 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:24:41.624 lcov: LCOV version 1.15 00:24:41.624 05:36:01 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:24:56.512 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:24:56.512 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:25:14.603 05:36:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:25:14.603 05:36:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:14.603 05:36:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.603 05:36:32 -- spdk/autotest.sh@78 -- # rm -f 00:25:14.603 05:36:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:14.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.603 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:14.603 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:14.603 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:14.603 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:14.603 05:36:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:25:14.603 05:36:33 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:25:14.603 05:36:33 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:25:14.603 05:36:33 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2c2n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme2c2n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:25:14.603 
05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:25:14.603 05:36:33 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:25:14.603 05:36:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:25:14.603 05:36:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.603 05:36:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:25:14.603 05:36:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:14.603 05:36:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:14.603 05:36:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:25:14.603 05:36:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:25:14.603 05:36:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:25:14.603 No valid GPT data, bailing 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # pt= 00:25:14.603 05:36:33 -- scripts/common.sh@395 -- # return 1 00:25:14.603 05:36:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:25:14.603 1+0 records in 00:25:14.603 1+0 records out 00:25:14.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159701 s, 65.7 MB/s 00:25:14.603 05:36:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:14.603 05:36:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:14.603 05:36:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:25:14.603 05:36:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:25:14.603 05:36:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:25:14.603 No valid GPT data, bailing 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # pt= 00:25:14.603 05:36:33 -- scripts/common.sh@395 -- # return 1 00:25:14.603 05:36:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:25:14.603 1+0 records in 00:25:14.603 1+0 records out 00:25:14.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00811773 s, 129 MB/s 00:25:14.603 05:36:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:14.603 05:36:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:14.603 05:36:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:25:14.603 05:36:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:25:14.603 05:36:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:25:14.603 No valid GPT data, bailing 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # pt= 00:25:14.603 05:36:33 -- scripts/common.sh@395 -- # return 1 00:25:14.603 05:36:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:25:14.603 1+0 
records in 00:25:14.603 1+0 records out 00:25:14.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517873 s, 202 MB/s 00:25:14.603 05:36:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:14.603 05:36:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:14.603 05:36:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:25:14.603 05:36:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:25:14.603 05:36:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:25:14.603 No valid GPT data, bailing 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:25:14.603 05:36:33 -- scripts/common.sh@394 -- # pt= 00:25:14.603 05:36:33 -- scripts/common.sh@395 -- # return 1 00:25:14.603 05:36:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:25:14.603 1+0 records in 00:25:14.603 1+0 records out 00:25:14.603 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00699863 s, 150 MB/s 00:25:14.603 05:36:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:14.604 05:36:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:14.604 05:36:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:25:14.604 05:36:33 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:25:14.604 05:36:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:25:14.604 No valid GPT data, bailing 00:25:14.604 05:36:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:25:14.604 05:36:33 -- scripts/common.sh@394 -- # pt= 00:25:14.604 05:36:33 -- scripts/common.sh@395 -- # return 1 00:25:14.604 05:36:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:25:14.604 1+0 records in 00:25:14.604 1+0 records out 00:25:14.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679193 s, 154 MB/s 00:25:14.604 05:36:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:25:14.604 05:36:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:25:14.604 05:36:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:25:14.604 05:36:33 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:25:14.604 05:36:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:25:14.604 No valid GPT data, bailing 00:25:14.604 05:36:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:25:14.604 05:36:34 -- scripts/common.sh@394 -- # pt= 00:25:14.604 05:36:34 -- scripts/common.sh@395 -- # return 1 00:25:14.604 05:36:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:25:14.604 1+0 records in 00:25:14.604 1+0 records out 00:25:14.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704873 s, 149 MB/s 00:25:14.604 05:36:34 -- spdk/autotest.sh@105 -- # sync 00:25:14.604 05:36:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:25:14.604 05:36:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:25:14.604 05:36:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:25:17.137 05:36:36 -- spdk/autotest.sh@111 -- # uname -s 00:25:17.137 05:36:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:25:17.137 05:36:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:25:17.137 05:36:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:25:18.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:18.331 
Hugepages 00:25:18.331 node hugesize free / total 00:25:18.331 node0 1048576kB 0 / 0 00:25:18.331 node0 2048kB 0 / 0 00:25:18.331 00:25:18.331 Type BDF Vendor Device NUMA Driver Device Block devices 00:25:18.588 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:25:18.588 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:25:18.845 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:25:18.845 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:25:18.845 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:25:18.845 05:36:38 -- spdk/autotest.sh@117 -- # uname -s 00:25:18.845 05:36:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:25:18.845 05:36:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:25:18.845 05:36:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:19.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:20.344 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.344 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.602 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.602 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:20.602 05:36:40 -- common/autotest_common.sh@1515 -- # sleep 1 00:25:21.543 05:36:41 -- common/autotest_common.sh@1516 -- # bdfs=() 00:25:21.543 05:36:41 -- common/autotest_common.sh@1516 -- # local bdfs 00:25:21.543 05:36:41 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:25:21.543 05:36:41 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:25:21.543 05:36:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:25:21.543 05:36:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:25:21.543 05:36:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:21.543 05:36:41 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:21.543 05:36:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:25:21.802 05:36:41 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:25:21.802 05:36:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:21.802 05:36:41 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:22.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:22.629 Waiting for block devices as requested 00:25:22.629 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:22.629 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:22.629 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:22.888 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:28.185 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:28.185 05:36:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:28.185 05:36:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:25:28.185 05:36:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:25:28.185 05:36:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:28.185 05:36:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:28.185 05:36:47 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:25:28.185 05:36:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:25:28.185 05:36:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:25:28.185 05:36:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:25:28.185 05:36:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:25:28.185 05:36:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:28.185 05:36:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:25:28.185 05:36:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:28.186 05:36:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1541 -- # continue 00:25:28.186 05:36:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:28.186 05:36:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:28.186 05:36:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1541 -- # continue 00:25:28.186 05:36:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:28.186 05:36:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:28.186 05:36:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1541 -- # continue 00:25:28.186 05:36:47 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:25:28.186 05:36:47 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:25:28.186 05:36:47 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # grep oacs 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:25:28.186 05:36:47 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:25:28.186 05:36:47 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:25:28.186 05:36:47 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:25:28.186 05:36:47 -- common/autotest_common.sh@1541 -- # continue 00:25:28.186 05:36:47 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:25:28.186 05:36:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:28.186 05:36:47 -- common/autotest_common.sh@10 -- # set +x 00:25:28.186 05:36:47 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:25:28.186 05:36:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:28.186 05:36:47 -- common/autotest_common.sh@10 -- # set +x 00:25:28.186 05:36:47 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:28.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:29.324 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.324 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.324 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.583 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:29.583 05:36:49 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:25:29.583 05:36:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:29.583 05:36:49 -- common/autotest_common.sh@10 -- # set +x 00:25:29.583 05:36:49 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:25:29.583 05:36:49 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:25:29.583 05:36:49 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:25:29.583 05:36:49 -- common/autotest_common.sh@1561 -- # bdfs=() 00:25:29.583 05:36:49 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:25:29.583 05:36:49 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:25:29.583 05:36:49 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:25:29.584 05:36:49 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:25:29.584 05:36:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:25:29.584 05:36:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:25:29.584 05:36:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:29.584 05:36:49 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:29.584 05:36:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:25:29.843 05:36:49 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:25:29.843 05:36:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:29.843 05:36:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:29.843 05:36:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:29.843 05:36:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:29.843 05:36:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:29.843 05:36:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:29.843 05:36:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:25:29.843 05:36:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:25:29.843 05:36:49 -- common/autotest_common.sh@1564 -- # device=0x0010 00:25:29.843 05:36:49 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:25:29.843 05:36:49 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:25:29.843 05:36:49 -- common/autotest_common.sh@1570 -- # return 0 00:25:29.843 05:36:49 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:25:29.843 05:36:49 -- common/autotest_common.sh@1578 -- # return 0 00:25:29.843 05:36:49 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:25:29.843 05:36:49 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:25:29.843 05:36:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:29.843 05:36:49 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:25:29.843 05:36:49 -- spdk/autotest.sh@149 -- # timing_enter lib 00:25:29.843 05:36:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:29.843 05:36:49 -- common/autotest_common.sh@10 -- # set +x 00:25:29.843 05:36:49 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:25:29.843 05:36:49 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:29.843 05:36:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:29.843 05:36:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:29.843 05:36:49 -- common/autotest_common.sh@10 -- # set +x 00:25:29.843 ************************************ 00:25:29.843 START TEST env 00:25:29.843 ************************************ 00:25:29.843 05:36:49 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:25:29.843 * Looking for test storage... 00:25:30.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:25:30.119 05:36:49 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:30.119 05:36:49 env -- common/autotest_common.sh@1691 -- # lcov --version 00:25:30.119 05:36:49 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:30.119 05:36:49 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:30.119 05:36:49 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.119 05:36:49 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.119 05:36:49 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.120 05:36:49 env -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.120 05:36:49 env -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.120 05:36:49 env -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.120 05:36:49 env -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.120 05:36:49 env -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.120 05:36:49 env -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.120 05:36:49 env -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.120 05:36:49 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.120 05:36:49 env -- scripts/common.sh@344 -- # case "$op" in 00:25:30.120 05:36:49 env -- scripts/common.sh@345 -- # : 1 00:25:30.120 05:36:49 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.120 05:36:49 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.120 05:36:49 env -- scripts/common.sh@365 -- # decimal 1 00:25:30.120 05:36:49 env -- scripts/common.sh@353 -- # local d=1 00:25:30.120 05:36:49 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.120 05:36:49 env -- scripts/common.sh@355 -- # echo 1 00:25:30.120 05:36:49 env -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.120 05:36:49 env -- scripts/common.sh@366 -- # decimal 2 00:25:30.120 05:36:49 env -- scripts/common.sh@353 -- # local d=2 00:25:30.120 05:36:49 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.120 05:36:49 env -- scripts/common.sh@355 -- # echo 2 00:25:30.120 05:36:49 env -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.120 05:36:49 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.120 05:36:49 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.120 05:36:49 env -- scripts/common.sh@368 -- # return 0 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:30.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.120 --rc genhtml_branch_coverage=1 00:25:30.120 --rc genhtml_function_coverage=1 00:25:30.120 --rc genhtml_legend=1 00:25:30.120 --rc geninfo_all_blocks=1 00:25:30.120 --rc geninfo_unexecuted_blocks=1 00:25:30.120 00:25:30.120 ' 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:30.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.120 --rc genhtml_branch_coverage=1 00:25:30.120 --rc genhtml_function_coverage=1 00:25:30.120 --rc genhtml_legend=1 00:25:30.120 --rc geninfo_all_blocks=1 00:25:30.120 --rc geninfo_unexecuted_blocks=1 00:25:30.120 00:25:30.120 ' 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:30.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.120 --rc genhtml_branch_coverage=1 00:25:30.120 --rc genhtml_function_coverage=1 00:25:30.120 --rc genhtml_legend=1 00:25:30.120 --rc geninfo_all_blocks=1 00:25:30.120 --rc geninfo_unexecuted_blocks=1 00:25:30.120 00:25:30.120 ' 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:30.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.120 --rc genhtml_branch_coverage=1 00:25:30.120 --rc genhtml_function_coverage=1 00:25:30.120 --rc genhtml_legend=1 00:25:30.120 --rc geninfo_all_blocks=1 00:25:30.120 --rc geninfo_unexecuted_blocks=1 00:25:30.120 00:25:30.120 ' 00:25:30.120 05:36:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:30.120 05:36:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:30.120 05:36:49 env -- common/autotest_common.sh@10 -- # set +x 00:25:30.120 ************************************ 00:25:30.120 START TEST env_memory 00:25:30.120 ************************************ 00:25:30.120 05:36:49 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:25:30.120 00:25:30.120 00:25:30.120 CUnit - A unit testing framework for C - Version 2.1-3 00:25:30.120 http://cunit.sourceforge.net/ 00:25:30.120 00:25:30.120 00:25:30.120 Suite: memory 00:25:30.120 Test: alloc and free memory map ...[2024-11-20 05:36:49.941172] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:25:30.120 passed 00:25:30.120 Test: mem map translation ...[2024-11-20 05:36:50.016399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:25:30.120 [2024-11-20 05:36:50.016484] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:25:30.120 [2024-11-20 05:36:50.016554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:25:30.120 [2024-11-20 05:36:50.016572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:25:30.378 passed 00:25:30.378 Test: mem map registration ...[2024-11-20 05:36:50.118346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:25:30.378 [2024-11-20 05:36:50.118437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:25:30.378 passed 00:25:30.378 Test: mem map adjacent registrations ...passed 00:25:30.378 00:25:30.378 Run Summary: Type Total Ran Passed Failed Inactive 00:25:30.378 suites 1 1 n/a 0 0 00:25:30.378 tests 4 4 4 0 0 00:25:30.378 asserts 152 152 152 0 n/a 00:25:30.378 00:25:30.378 Elapsed time = 0.334 seconds 00:25:30.378 00:25:30.378 real 0m0.384s 00:25:30.378 user 0m0.342s 00:25:30.378 sys 0m0.031s 00:25:30.378 05:36:50 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:30.378 05:36:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:25:30.378 ************************************ 00:25:30.378 END TEST env_memory 00:25:30.378 ************************************ 00:25:30.637 05:36:50 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:30.637 05:36:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:30.637 05:36:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:30.637 05:36:50 env -- common/autotest_common.sh@10 -- # set +x 00:25:30.637 ************************************ 00:25:30.637 START TEST env_vtophys 00:25:30.637 ************************************ 00:25:30.637 05:36:50 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:25:30.637 EAL: lib.eal log level changed from notice to debug 00:25:30.637 EAL: Detected lcore 0 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 1 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 2 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 3 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 4 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 5 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 6 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 7 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 8 as core 0 on socket 0 00:25:30.637 EAL: Detected lcore 9 as core 0 on socket 0 00:25:30.637 EAL: Maximum logical cores by configuration: 128 00:25:30.637 EAL: Detected CPU lcores: 10 00:25:30.637 EAL: Detected NUMA nodes: 1 00:25:30.637 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:25:30.637 EAL: Detected shared linkage of DPDK 00:25:30.637 EAL: No 
shared files mode enabled, IPC will be disabled 00:25:30.637 EAL: Selected IOVA mode 'PA' 00:25:30.637 EAL: Probing VFIO support... 00:25:30.637 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:30.637 EAL: VFIO modules not loaded, skipping VFIO support... 00:25:30.637 EAL: Ask a virtual area of 0x2e000 bytes 00:25:30.637 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:25:30.637 EAL: Setting up physically contiguous memory... 00:25:30.637 EAL: Setting maximum number of open files to 524288 00:25:30.637 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:25:30.637 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:25:30.637 EAL: Ask a virtual area of 0x61000 bytes 00:25:30.637 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:25:30.637 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:30.637 EAL: Ask a virtual area of 0x400000000 bytes 00:25:30.637 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:25:30.637 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:25:30.637 EAL: Ask a virtual area of 0x61000 bytes 00:25:30.637 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:25:30.637 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:30.637 EAL: Ask a virtual area of 0x400000000 bytes 00:25:30.637 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:25:30.637 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:25:30.637 EAL: Ask a virtual area of 0x61000 bytes 00:25:30.637 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:25:30.637 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:30.637 EAL: Ask a virtual area of 0x400000000 bytes 00:25:30.637 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:25:30.637 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:25:30.637 EAL: Ask a virtual area of 0x61000 bytes 00:25:30.637 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:25:30.637 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:25:30.637 EAL: Ask a virtual area of 0x400000000 bytes 00:25:30.637 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:25:30.637 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:25:30.637 EAL: Hugepages will be freed exactly as allocated. 00:25:30.637 EAL: No shared files mode enabled, IPC is disabled 00:25:30.637 EAL: No shared files mode enabled, IPC is disabled 00:25:30.637 EAL: TSC frequency is ~2290000 KHz 00:25:30.637 EAL: Main lcore 0 is ready (tid=7f2240e17a40;cpuset=[0]) 00:25:30.637 EAL: Trying to obtain current memory policy. 00:25:30.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:30.637 EAL: Restoring previous memory policy: 0 00:25:30.637 EAL: request: mp_malloc_sync 00:25:30.637 EAL: No shared files mode enabled, IPC is disabled 00:25:30.637 EAL: Heap on socket 0 was expanded by 2MB 00:25:30.637 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:25:30.637 EAL: No PCI address specified using 'addr=' in: bus=pci 00:25:30.637 EAL: Mem event callback 'spdk:(nil)' registered 00:25:30.637 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:25:30.896 00:25:30.896 00:25:30.896 CUnit - A unit testing framework for C - Version 2.1-3 00:25:30.896 http://cunit.sourceforge.net/ 00:25:30.896 00:25:30.896 00:25:30.896 Suite: components_suite 00:25:31.463 Test: vtophys_malloc_test ...passed 00:25:31.463 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:25:31.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.463 EAL: Restoring previous memory policy: 4 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was expanded by 4MB 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was shrunk by 4MB 00:25:31.463 EAL: Trying to obtain current memory policy. 00:25:31.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.463 EAL: Restoring previous memory policy: 4 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was expanded by 6MB 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was shrunk by 6MB 00:25:31.463 EAL: Trying to obtain current memory policy. 00:25:31.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.463 EAL: Restoring previous memory policy: 4 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was expanded by 10MB 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was shrunk by 10MB 00:25:31.463 EAL: Trying to obtain current memory policy. 00:25:31.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.463 EAL: Restoring previous memory policy: 4 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was expanded by 18MB 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was shrunk by 18MB 00:25:31.463 EAL: Trying to obtain current memory policy. 00:25:31.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.463 EAL: Restoring previous memory policy: 4 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was expanded by 34MB 00:25:31.463 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.463 EAL: request: mp_malloc_sync 00:25:31.463 EAL: No shared files mode enabled, IPC is disabled 00:25:31.463 EAL: Heap on socket 0 was shrunk by 34MB 00:25:31.722 EAL: Trying to obtain current memory policy. 
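The repeated expand/shrink pairs in this suite come from vtophys_malloc_test allocating progressively larger DMA buffers and freeing them again; each allocation and free is what fires the 'spdk:(nil)' mem event callback recorded above. A minimal sketch of the same pattern against the public spdk/env.h API (an illustration, not the test source; the app name is made up):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_sketch";            /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* Growing allocations from hugepage-backed memory drive the
     * "Heap on socket 0 was expanded by ..." events; each free
     * produces the matching "shrunk by" line. */
    for (size_t size = 4ULL << 20; size <= 64ULL << 20; size *= 2) {
        void *buf = spdk_dma_malloc(size, 0x1000, NULL);
        if (buf == NULL) {
            break;
        }
        printf("%zu MiB: vaddr %p -> paddr 0x%" PRIx64 "\n",
               size >> 20, buf, spdk_vtophys(buf, NULL));
        spdk_dma_free(buf);
    }
    return 0;
}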
00:25:31.722 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.722 EAL: Restoring previous memory policy: 4 00:25:31.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.722 EAL: request: mp_malloc_sync 00:25:31.722 EAL: No shared files mode enabled, IPC is disabled 00:25:31.722 EAL: Heap on socket 0 was expanded by 66MB 00:25:31.722 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.722 EAL: request: mp_malloc_sync 00:25:31.722 EAL: No shared files mode enabled, IPC is disabled 00:25:31.722 EAL: Heap on socket 0 was shrunk by 66MB 00:25:31.981 EAL: Trying to obtain current memory policy. 00:25:31.981 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:31.981 EAL: Restoring previous memory policy: 4 00:25:31.981 EAL: Calling mem event callback 'spdk:(nil)' 00:25:31.981 EAL: request: mp_malloc_sync 00:25:31.981 EAL: No shared files mode enabled, IPC is disabled 00:25:31.981 EAL: Heap on socket 0 was expanded by 130MB 00:25:32.241 EAL: Calling mem event callback 'spdk:(nil)' 00:25:32.241 EAL: request: mp_malloc_sync 00:25:32.241 EAL: No shared files mode enabled, IPC is disabled 00:25:32.241 EAL: Heap on socket 0 was shrunk by 130MB 00:25:32.500 EAL: Trying to obtain current memory policy. 00:25:32.500 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:32.759 EAL: Restoring previous memory policy: 4 00:25:32.759 EAL: Calling mem event callback 'spdk:(nil)' 00:25:32.759 EAL: request: mp_malloc_sync 00:25:32.759 EAL: No shared files mode enabled, IPC is disabled 00:25:32.759 EAL: Heap on socket 0 was expanded by 258MB 00:25:33.327 EAL: Calling mem event callback 'spdk:(nil)' 00:25:33.327 EAL: request: mp_malloc_sync 00:25:33.327 EAL: No shared files mode enabled, IPC is disabled 00:25:33.327 EAL: Heap on socket 0 was shrunk by 258MB 00:25:33.896 EAL: Trying to obtain current memory policy. 00:25:33.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:34.156 EAL: Restoring previous memory policy: 4 00:25:34.156 EAL: Calling mem event callback 'spdk:(nil)' 00:25:34.156 EAL: request: mp_malloc_sync 00:25:34.156 EAL: No shared files mode enabled, IPC is disabled 00:25:34.156 EAL: Heap on socket 0 was expanded by 514MB 00:25:35.159 EAL: Calling mem event callback 'spdk:(nil)' 00:25:35.418 EAL: request: mp_malloc_sync 00:25:35.419 EAL: No shared files mode enabled, IPC is disabled 00:25:35.419 EAL: Heap on socket 0 was shrunk by 514MB 00:25:36.353 EAL: Trying to obtain current memory policy. 
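The mem event callbacks firing throughout these cycles guard the same translation maps that application code reaches through spdk_mem_register(); the unaligned vaddr/len cases env_memory rejected earlier in this run fail that path's parameter check. A hedged sketch, assuming only the public register/unregister pair:

#include "spdk/env.h"

/* Register an externally allocated region so SPDK can translate it.
 * Both vaddr and len must be multiples of 2 MiB; unaligned values are
 * rejected with -EINVAL, as seen in the env_memory output above. */
static int
use_external_region(void *vaddr, size_t len)
{
    int rc = spdk_mem_register(vaddr, len);

    if (rc != 0) {
        return rc;
    }
    /* ... the region may now be used for DMA / vtophys lookups ... */
    return spdk_mem_unregister(vaddr, len);
}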
00:25:36.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:25:36.611 EAL: Restoring previous memory policy: 4 00:25:36.611 EAL: Calling mem event callback 'spdk:(nil)' 00:25:36.611 EAL: request: mp_malloc_sync 00:25:36.612 EAL: No shared files mode enabled, IPC is disabled 00:25:36.612 EAL: Heap on socket 0 was expanded by 1026MB 00:25:38.568 EAL: Calling mem event callback 'spdk:(nil)' 00:25:39.137 EAL: request: mp_malloc_sync 00:25:39.137 EAL: No shared files mode enabled, IPC is disabled 00:25:39.137 EAL: Heap on socket 0 was shrunk by 1026MB 00:25:41.039 passed 00:25:41.039 00:25:41.039 Run Summary: Type Total Ran Passed Failed Inactive 00:25:41.039 suites 1 1 n/a 0 0 00:25:41.039 tests 2 2 2 0 0 00:25:41.039 asserts 5789 5789 5789 0 n/a 00:25:41.039 00:25:41.039 Elapsed time = 9.956 seconds 00:25:41.039 EAL: Calling mem event callback 'spdk:(nil)' 00:25:41.039 EAL: request: mp_malloc_sync 00:25:41.039 EAL: No shared files mode enabled, IPC is disabled 00:25:41.039 EAL: Heap on socket 0 was shrunk by 2MB 00:25:41.039 EAL: No shared files mode enabled, IPC is disabled 00:25:41.039 EAL: No shared files mode enabled, IPC is disabled 00:25:41.039 EAL: No shared files mode enabled, IPC is disabled 00:25:41.039 00:25:41.039 real 0m10.310s 00:25:41.039 user 0m8.765s 00:25:41.039 sys 0m1.379s 00:25:41.039 05:37:00 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.040 05:37:00 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:25:41.040 ************************************ 00:25:41.040 END TEST env_vtophys 00:25:41.040 ************************************ 00:25:41.040 05:37:00 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:41.040 05:37:00 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:41.040 05:37:00 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.040 05:37:00 env -- common/autotest_common.sh@10 -- # set +x 00:25:41.040 ************************************ 00:25:41.040 START TEST env_pci 00:25:41.040 ************************************ 00:25:41.040 05:37:00 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:25:41.040 00:25:41.040 00:25:41.040 CUnit - A unit testing framework for C - Version 2.1-3 00:25:41.040 http://cunit.sourceforge.net/ 00:25:41.040 00:25:41.040 00:25:41.040 Suite: pci 00:25:41.040 Test: pci_hook ...[2024-11-20 05:37:00.749168] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57980 has claimed it 00:25:41.040 passed 00:25:41.040 00:25:41.040 Run Summary: Type Total Ran Passed Failed Inactive 00:25:41.040 suites 1 1 n/a 0 0 00:25:41.040 tests 1 1 1 0 0 00:25:41.040 asserts 25 25 25 0 n/a 00:25:41.040 00:25:41.040 Elapsed time = 0.010 seconds 00:25:41.040 EAL: Cannot find device (10000:00:01.0) 00:25:41.040 EAL: Failed to attach device on primary process 00:25:41.040 00:25:41.040 real 0m0.103s 00:25:41.040 user 0m0.046s 00:25:41.040 sys 0m0.056s 00:25:41.040 05:37:00 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.040 05:37:00 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:25:41.040 ************************************ 00:25:41.040 END TEST env_pci 00:25:41.040 ************************************ 00:25:41.040 05:37:00 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:25:41.040 05:37:00 env -- env/env.sh@15 -- # uname 00:25:41.040 05:37:00 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:25:41.040 05:37:00 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:25:41.040 05:37:00 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:41.040 05:37:00 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:41.040 05:37:00 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.040 05:37:00 env -- common/autotest_common.sh@10 -- # set +x 00:25:41.040 ************************************ 00:25:41.040 START TEST env_dpdk_post_init 00:25:41.040 ************************************ 00:25:41.040 05:37:00 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:25:41.040 EAL: Detected CPU lcores: 10 00:25:41.040 EAL: Detected NUMA nodes: 1 00:25:41.040 EAL: Detected shared linkage of DPDK 00:25:41.298 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:41.298 EAL: Selected IOVA mode 'PA' 00:25:41.298 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:41.298 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:25:41.298 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:25:41.298 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:25:41.298 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:25:41.298 Starting DPDK initialization... 00:25:41.298 Starting SPDK post initialization... 00:25:41.298 SPDK NVMe probe 00:25:41.298 Attaching to 0000:00:10.0 00:25:41.298 Attaching to 0000:00:11.0 00:25:41.298 Attaching to 0000:00:12.0 00:25:41.298 Attaching to 0000:00:13.0 00:25:41.298 Attached to 0000:00:10.0 00:25:41.298 Attached to 0000:00:11.0 00:25:41.298 Attached to 0000:00:13.0 00:25:41.298 Attached to 0000:00:12.0 00:25:41.298 Cleaning up... 
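The four controllers attached above (QEMU's 1b36:0010 NVMe devices at 0000:00:10.0 through 0000:00:13.0) are claimed through SPDK's probe/attach callback pair. A minimal sketch of that flow (illustrative only, not the env_dpdk_post_init source; the app name is made up):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;                    /* claim every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "probe_sketch";     /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* A NULL transport ID enumerates the default (PCIe) transport;
     * probe_cb then attach_cb fire once per controller. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}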
00:25:41.298 00:25:41.298 real 0m0.316s 00:25:41.298 user 0m0.129s 00:25:41.298 sys 0m0.090s 00:25:41.298 05:37:01 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.298 05:37:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:25:41.298 ************************************ 00:25:41.298 END TEST env_dpdk_post_init 00:25:41.298 ************************************ 00:25:41.557 05:37:01 env -- env/env.sh@26 -- # uname 00:25:41.557 05:37:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:25:41.557 05:37:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:41.557 05:37:01 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:41.557 05:37:01 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.557 05:37:01 env -- common/autotest_common.sh@10 -- # set +x 00:25:41.557 ************************************ 00:25:41.557 START TEST env_mem_callbacks 00:25:41.557 ************************************ 00:25:41.557 05:37:01 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:25:41.557 EAL: Detected CPU lcores: 10 00:25:41.557 EAL: Detected NUMA nodes: 1 00:25:41.557 EAL: Detected shared linkage of DPDK 00:25:41.557 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:25:41.557 EAL: Selected IOVA mode 'PA' 00:25:41.557 TELEMETRY: No legacy callbacks, legacy socket not created 00:25:41.557 00:25:41.557 00:25:41.557 CUnit - A unit testing framework for C - Version 2.1-3 00:25:41.557 http://cunit.sourceforge.net/ 00:25:41.557 00:25:41.557 00:25:41.557 Suite: memory 00:25:41.557 Test: test ... 00:25:41.557 register 0x200000200000 2097152 00:25:41.557 malloc 3145728 00:25:41.557 register 0x200000400000 4194304 00:25:41.557 buf 0x2000004fffc0 len 3145728 PASSED 00:25:41.557 malloc 64 00:25:41.557 buf 0x2000004ffec0 len 64 PASSED 00:25:41.557 malloc 4194304 00:25:41.557 register 0x200000800000 6291456 00:25:41.557 buf 0x2000009fffc0 len 4194304 PASSED 00:25:41.557 free 0x2000004fffc0 3145728 00:25:41.557 free 0x2000004ffec0 64 00:25:41.816 unregister 0x200000400000 4194304 PASSED 00:25:41.816 free 0x2000009fffc0 4194304 00:25:41.816 unregister 0x200000800000 6291456 PASSED 00:25:41.816 malloc 8388608 00:25:41.816 register 0x200000400000 10485760 00:25:41.816 buf 0x2000005fffc0 len 8388608 PASSED 00:25:41.816 free 0x2000005fffc0 8388608 00:25:41.816 unregister 0x200000400000 10485760 PASSED 00:25:41.816 passed 00:25:41.816 00:25:41.816 Run Summary: Type Total Ran Passed Failed Inactive 00:25:41.816 suites 1 1 n/a 0 0 00:25:41.816 tests 1 1 1 0 0 00:25:41.816 asserts 15 15 15 0 n/a 00:25:41.816 00:25:41.816 Elapsed time = 0.087 seconds 00:25:41.816 00:25:41.816 real 0m0.292s 00:25:41.816 user 0m0.116s 00:25:41.816 sys 0m0.074s 00:25:41.816 05:37:01 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.816 05:37:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:25:41.816 ************************************ 00:25:41.816 END TEST env_mem_callbacks 00:25:41.816 ************************************ 00:25:41.816 00:25:41.816 real 0m11.961s 00:25:41.816 user 0m9.594s 00:25:41.816 sys 0m1.997s 00:25:41.816 05:37:01 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.816 05:37:01 env -- common/autotest_common.sh@10 -- # set +x 00:25:41.816 ************************************ 00:25:41.816 END TEST env 00:25:41.816 
************************************ 00:25:41.816 05:37:01 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:41.816 05:37:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:41.816 05:37:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.816 05:37:01 -- common/autotest_common.sh@10 -- # set +x 00:25:41.816 ************************************ 00:25:41.816 START TEST rpc 00:25:41.816 ************************************ 00:25:41.816 05:37:01 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:25:42.075 * Looking for test storage... 00:25:42.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:42.075 05:37:01 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:42.075 05:37:01 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:42.075 05:37:01 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.076 05:37:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.076 05:37:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.076 05:37:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.076 05:37:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.076 05:37:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.076 05:37:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:42.076 05:37:01 rpc -- scripts/common.sh@345 -- # : 1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.076 05:37:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:42.076 05:37:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@353 -- # local d=1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.076 05:37:01 rpc -- scripts/common.sh@355 -- # echo 1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.076 05:37:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@353 -- # local d=2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.076 05:37:01 rpc -- scripts/common.sh@355 -- # echo 2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.076 05:37:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.076 05:37:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.076 05:37:01 rpc -- scripts/common.sh@368 -- # return 0 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.076 --rc genhtml_branch_coverage=1 00:25:42.076 --rc genhtml_function_coverage=1 00:25:42.076 --rc genhtml_legend=1 00:25:42.076 --rc geninfo_all_blocks=1 00:25:42.076 --rc geninfo_unexecuted_blocks=1 00:25:42.076 00:25:42.076 ' 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.076 --rc genhtml_branch_coverage=1 00:25:42.076 --rc genhtml_function_coverage=1 00:25:42.076 --rc genhtml_legend=1 00:25:42.076 --rc geninfo_all_blocks=1 00:25:42.076 --rc geninfo_unexecuted_blocks=1 00:25:42.076 00:25:42.076 ' 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.076 --rc genhtml_branch_coverage=1 00:25:42.076 --rc genhtml_function_coverage=1 00:25:42.076 --rc genhtml_legend=1 00:25:42.076 --rc geninfo_all_blocks=1 00:25:42.076 --rc geninfo_unexecuted_blocks=1 00:25:42.076 00:25:42.076 ' 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:42.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.076 --rc genhtml_branch_coverage=1 00:25:42.076 --rc genhtml_function_coverage=1 00:25:42.076 --rc genhtml_legend=1 00:25:42.076 --rc geninfo_all_blocks=1 00:25:42.076 --rc geninfo_unexecuted_blocks=1 00:25:42.076 00:25:42.076 ' 00:25:42.076 05:37:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58107 00:25:42.076 05:37:01 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:25:42.076 05:37:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:42.076 05:37:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58107 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@833 -- # '[' -z 58107 ']' 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
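Every rpc_cmd call below travels over the UNIX socket spdk_tgt is opening at /var/tmp/spdk.sock; on the target side each method name, including the bdev_malloc_create and bdev_passthru_create used in these tests, is bound to a handler with SPDK_RPC_REGISTER. A hedged sketch of a trivial method (the method name here is made up):

#include "spdk/rpc.h"
#include "spdk/jsonrpc.h"

/* Invoked when a client sends {"method": "rpc_sketch_ping"}; the
 * built-in methods driven below (bdev_malloc_create, bdev_get_bdevs,
 * bdev_passthru_create, ...) are registered the same way inside SPDK. */
static void
rpc_sketch_ping(struct spdk_jsonrpc_request *request,
                const struct spdk_json_val *params)
{
    if (params != NULL) {
        spdk_jsonrpc_send_error_response(request,
                                         SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                                         "rpc_sketch_ping takes no parameters");
        return;
    }
    spdk_jsonrpc_send_bool_response(request, true);
}
SPDK_RPC_REGISTER("rpc_sketch_ping", rpc_sketch_ping, SPDK_RPC_RUNTIME)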
00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.076 05:37:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:42.076 [2024-11-20 05:37:01.988569] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:25:42.076 [2024-11-20 05:37:01.988710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58107 ] 00:25:42.335 [2024-11-20 05:37:02.169787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.595 [2024-11-20 05:37:02.292622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:25:42.595 [2024-11-20 05:37:02.292687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58107' to capture a snapshot of events at runtime. 00:25:42.595 [2024-11-20 05:37:02.292698] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.595 [2024-11-20 05:37:02.292707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.595 [2024-11-20 05:37:02.292715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58107 for offline analysis/debug. 00:25:42.595 [2024-11-20 05:37:02.294022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.529 05:37:03 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:43.529 05:37:03 rpc -- common/autotest_common.sh@866 -- # return 0 00:25:43.529 05:37:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:43.529 05:37:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:25:43.529 05:37:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:25:43.529 05:37:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:25:43.529 05:37:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:43.529 05:37:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:43.529 05:37:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:43.529 ************************************ 00:25:43.529 START TEST rpc_integrity 00:25:43.529 ************************************ 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.529 05:37:03 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:43.529 { 00:25:43.529 "name": "Malloc0", 00:25:43.529 "aliases": [ 00:25:43.529 "28007cb5-eb2e-4461-adbe-5327a9745d45" 00:25:43.529 ], 00:25:43.529 "product_name": "Malloc disk", 00:25:43.529 "block_size": 512, 00:25:43.529 "num_blocks": 16384, 00:25:43.529 "uuid": "28007cb5-eb2e-4461-adbe-5327a9745d45", 00:25:43.529 "assigned_rate_limits": { 00:25:43.529 "rw_ios_per_sec": 0, 00:25:43.529 "rw_mbytes_per_sec": 0, 00:25:43.529 "r_mbytes_per_sec": 0, 00:25:43.529 "w_mbytes_per_sec": 0 00:25:43.529 }, 00:25:43.529 "claimed": false, 00:25:43.529 "zoned": false, 00:25:43.529 "supported_io_types": { 00:25:43.529 "read": true, 00:25:43.529 "write": true, 00:25:43.529 "unmap": true, 00:25:43.529 "flush": true, 00:25:43.529 "reset": true, 00:25:43.529 "nvme_admin": false, 00:25:43.529 "nvme_io": false, 00:25:43.529 "nvme_io_md": false, 00:25:43.529 "write_zeroes": true, 00:25:43.529 "zcopy": true, 00:25:43.529 "get_zone_info": false, 00:25:43.529 "zone_management": false, 00:25:43.529 "zone_append": false, 00:25:43.529 "compare": false, 00:25:43.529 "compare_and_write": false, 00:25:43.529 "abort": true, 00:25:43.529 "seek_hole": false, 00:25:43.529 "seek_data": false, 00:25:43.529 "copy": true, 00:25:43.529 "nvme_iov_md": false 00:25:43.529 }, 00:25:43.529 "memory_domains": [ 00:25:43.529 { 00:25:43.529 "dma_device_id": "system", 00:25:43.529 "dma_device_type": 1 00:25:43.529 }, 00:25:43.529 { 00:25:43.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.529 "dma_device_type": 2 00:25:43.529 } 00:25:43.529 ], 00:25:43.529 "driver_specific": {} 00:25:43.529 } 00:25:43.529 ]' 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:43.529 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.529 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.788 [2024-11-20 05:37:03.452576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:25:43.788 [2024-11-20 05:37:03.452701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.788 [2024-11-20 05:37:03.452745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:43.788 [2024-11-20 05:37:03.452776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.788 [2024-11-20 05:37:03.455861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.788 [2024-11-20 05:37:03.455931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:43.788 Passthru0 00:25:43.788 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.788 
05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:43.788 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.788 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.788 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.788 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:43.788 { 00:25:43.788 "name": "Malloc0", 00:25:43.788 "aliases": [ 00:25:43.788 "28007cb5-eb2e-4461-adbe-5327a9745d45" 00:25:43.788 ], 00:25:43.788 "product_name": "Malloc disk", 00:25:43.788 "block_size": 512, 00:25:43.788 "num_blocks": 16384, 00:25:43.788 "uuid": "28007cb5-eb2e-4461-adbe-5327a9745d45", 00:25:43.788 "assigned_rate_limits": { 00:25:43.788 "rw_ios_per_sec": 0, 00:25:43.788 "rw_mbytes_per_sec": 0, 00:25:43.788 "r_mbytes_per_sec": 0, 00:25:43.788 "w_mbytes_per_sec": 0 00:25:43.788 }, 00:25:43.788 "claimed": true, 00:25:43.788 "claim_type": "exclusive_write", 00:25:43.788 "zoned": false, 00:25:43.788 "supported_io_types": { 00:25:43.788 "read": true, 00:25:43.788 "write": true, 00:25:43.788 "unmap": true, 00:25:43.788 "flush": true, 00:25:43.788 "reset": true, 00:25:43.788 "nvme_admin": false, 00:25:43.788 "nvme_io": false, 00:25:43.788 "nvme_io_md": false, 00:25:43.788 "write_zeroes": true, 00:25:43.788 "zcopy": true, 00:25:43.788 "get_zone_info": false, 00:25:43.788 "zone_management": false, 00:25:43.788 "zone_append": false, 00:25:43.788 "compare": false, 00:25:43.788 "compare_and_write": false, 00:25:43.788 "abort": true, 00:25:43.788 "seek_hole": false, 00:25:43.788 "seek_data": false, 00:25:43.788 "copy": true, 00:25:43.788 "nvme_iov_md": false 00:25:43.788 }, 00:25:43.788 "memory_domains": [ 00:25:43.788 { 00:25:43.788 "dma_device_id": "system", 00:25:43.788 "dma_device_type": 1 00:25:43.788 }, 00:25:43.788 { 00:25:43.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.788 "dma_device_type": 2 00:25:43.788 } 00:25:43.788 ], 00:25:43.788 "driver_specific": {} 00:25:43.788 }, 00:25:43.788 { 00:25:43.788 "name": "Passthru0", 00:25:43.788 "aliases": [ 00:25:43.788 "80e6a6b5-a862-5c90-b507-3d18e9d88bf3" 00:25:43.788 ], 00:25:43.788 "product_name": "passthru", 00:25:43.788 "block_size": 512, 00:25:43.788 "num_blocks": 16384, 00:25:43.788 "uuid": "80e6a6b5-a862-5c90-b507-3d18e9d88bf3", 00:25:43.788 "assigned_rate_limits": { 00:25:43.788 "rw_ios_per_sec": 0, 00:25:43.788 "rw_mbytes_per_sec": 0, 00:25:43.788 "r_mbytes_per_sec": 0, 00:25:43.788 "w_mbytes_per_sec": 0 00:25:43.788 }, 00:25:43.788 "claimed": false, 00:25:43.788 "zoned": false, 00:25:43.788 "supported_io_types": { 00:25:43.788 "read": true, 00:25:43.788 "write": true, 00:25:43.788 "unmap": true, 00:25:43.788 "flush": true, 00:25:43.789 "reset": true, 00:25:43.789 "nvme_admin": false, 00:25:43.789 "nvme_io": false, 00:25:43.789 "nvme_io_md": false, 00:25:43.789 "write_zeroes": true, 00:25:43.789 "zcopy": true, 00:25:43.789 "get_zone_info": false, 00:25:43.789 "zone_management": false, 00:25:43.789 "zone_append": false, 00:25:43.789 "compare": false, 00:25:43.789 "compare_and_write": false, 00:25:43.789 "abort": true, 00:25:43.789 "seek_hole": false, 00:25:43.789 "seek_data": false, 00:25:43.789 "copy": true, 00:25:43.789 "nvme_iov_md": false 00:25:43.789 }, 00:25:43.789 "memory_domains": [ 00:25:43.789 { 00:25:43.789 "dma_device_id": "system", 00:25:43.789 "dma_device_type": 1 00:25:43.789 }, 00:25:43.789 { 00:25:43.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.789 "dma_device_type": 2 
00:25:43.789 } 00:25:43.789 ], 00:25:43.789 "driver_specific": { 00:25:43.789 "passthru": { 00:25:43.789 "name": "Passthru0", 00:25:43.789 "base_bdev_name": "Malloc0" 00:25:43.789 } 00:25:43.789 } 00:25:43.789 } 00:25:43.789 ]' 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:25:43.789 05:37:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:43.789 00:25:43.789 real 0m0.372s 00:25:43.789 user 0m0.225s 00:25:43.789 sys 0m0.037s 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:43.789 05:37:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:43.789 ************************************ 00:25:43.789 END TEST rpc_integrity 00:25:43.789 ************************************ 00:25:43.789 05:37:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:25:43.789 05:37:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:43.789 05:37:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:43.789 05:37:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:43.789 ************************************ 00:25:43.789 START TEST rpc_plugins 00:25:43.789 ************************************ 00:25:43.789 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:25:43.789 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:25:43.789 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.789 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:43.789 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.789 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:25:43.789 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:25:43.789 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.789 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:25:44.048 { 00:25:44.048 "name": "Malloc1", 00:25:44.048 "aliases": 
[ 00:25:44.048 "e59d2c96-c9b4-44f5-bf63-0f067e7b95d4" 00:25:44.048 ], 00:25:44.048 "product_name": "Malloc disk", 00:25:44.048 "block_size": 4096, 00:25:44.048 "num_blocks": 256, 00:25:44.048 "uuid": "e59d2c96-c9b4-44f5-bf63-0f067e7b95d4", 00:25:44.048 "assigned_rate_limits": { 00:25:44.048 "rw_ios_per_sec": 0, 00:25:44.048 "rw_mbytes_per_sec": 0, 00:25:44.048 "r_mbytes_per_sec": 0, 00:25:44.048 "w_mbytes_per_sec": 0 00:25:44.048 }, 00:25:44.048 "claimed": false, 00:25:44.048 "zoned": false, 00:25:44.048 "supported_io_types": { 00:25:44.048 "read": true, 00:25:44.048 "write": true, 00:25:44.048 "unmap": true, 00:25:44.048 "flush": true, 00:25:44.048 "reset": true, 00:25:44.048 "nvme_admin": false, 00:25:44.048 "nvme_io": false, 00:25:44.048 "nvme_io_md": false, 00:25:44.048 "write_zeroes": true, 00:25:44.048 "zcopy": true, 00:25:44.048 "get_zone_info": false, 00:25:44.048 "zone_management": false, 00:25:44.048 "zone_append": false, 00:25:44.048 "compare": false, 00:25:44.048 "compare_and_write": false, 00:25:44.048 "abort": true, 00:25:44.048 "seek_hole": false, 00:25:44.048 "seek_data": false, 00:25:44.048 "copy": true, 00:25:44.048 "nvme_iov_md": false 00:25:44.048 }, 00:25:44.048 "memory_domains": [ 00:25:44.048 { 00:25:44.048 "dma_device_id": "system", 00:25:44.048 "dma_device_type": 1 00:25:44.048 }, 00:25:44.048 { 00:25:44.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.048 "dma_device_type": 2 00:25:44.048 } 00:25:44.048 ], 00:25:44.048 "driver_specific": {} 00:25:44.048 } 00:25:44.048 ]' 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:25:44.048 05:37:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:25:44.048 00:25:44.048 real 0m0.174s 00:25:44.048 user 0m0.097s 00:25:44.048 sys 0m0.030s 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:44.048 05:37:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:25:44.048 ************************************ 00:25:44.048 END TEST rpc_plugins 00:25:44.048 ************************************ 00:25:44.048 05:37:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:25:44.048 05:37:03 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:44.048 05:37:03 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:44.048 05:37:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:44.048 ************************************ 00:25:44.048 START TEST rpc_trace_cmd_test 00:25:44.048 ************************************ 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.048 05:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:25:44.048 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58107", 00:25:44.048 "tpoint_group_mask": "0x8", 00:25:44.048 "iscsi_conn": { 00:25:44.048 "mask": "0x2", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "scsi": { 00:25:44.048 "mask": "0x4", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "bdev": { 00:25:44.048 "mask": "0x8", 00:25:44.048 "tpoint_mask": "0xffffffffffffffff" 00:25:44.048 }, 00:25:44.048 "nvmf_rdma": { 00:25:44.048 "mask": "0x10", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "nvmf_tcp": { 00:25:44.048 "mask": "0x20", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "ftl": { 00:25:44.048 "mask": "0x40", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "blobfs": { 00:25:44.048 "mask": "0x80", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "dsa": { 00:25:44.048 "mask": "0x200", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "thread": { 00:25:44.048 "mask": "0x400", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "nvme_pcie": { 00:25:44.048 "mask": "0x800", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.048 "iaa": { 00:25:44.048 "mask": "0x1000", 00:25:44.048 "tpoint_mask": "0x0" 00:25:44.048 }, 00:25:44.049 "nvme_tcp": { 00:25:44.049 "mask": "0x2000", 00:25:44.049 "tpoint_mask": "0x0" 00:25:44.049 }, 00:25:44.049 "bdev_nvme": { 00:25:44.049 "mask": "0x4000", 00:25:44.049 "tpoint_mask": "0x0" 00:25:44.049 }, 00:25:44.049 "sock": { 00:25:44.049 "mask": "0x8000", 00:25:44.049 "tpoint_mask": "0x0" 00:25:44.049 }, 00:25:44.049 "blob": { 00:25:44.049 "mask": "0x10000", 00:25:44.049 "tpoint_mask": "0x0" 00:25:44.049 }, 00:25:44.049 "bdev_raid": { 00:25:44.049 "mask": "0x20000", 00:25:44.049 "tpoint_mask": "0x0" 00:25:44.049 }, 00:25:44.049 "scheduler": { 00:25:44.049 "mask": "0x40000", 00:25:44.049 "tpoint_mask": "0x0" 00:25:44.049 } 00:25:44.049 }' 00:25:44.049 05:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:25:44.308 05:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:25:44.308 05:37:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:25:44.308 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:25:44.308 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:25:44.309 00:25:44.309 real 0m0.260s 00:25:44.309 user 0m0.208s 00:25:44.309 sys 0m0.042s 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:25:44.309 05:37:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.309 ************************************ 00:25:44.309 END TEST rpc_trace_cmd_test 00:25:44.309 ************************************ 00:25:44.567 05:37:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:25:44.567 05:37:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:25:44.567 05:37:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:25:44.567 05:37:04 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:44.567 05:37:04 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:44.567 05:37:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:44.567 ************************************ 00:25:44.567 START TEST rpc_daemon_integrity 00:25:44.567 ************************************ 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.567 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:25:44.567 { 00:25:44.567 "name": "Malloc2", 00:25:44.567 "aliases": [ 00:25:44.567 "9b1b913a-3b14-40e6-94d5-076cdf615801" 00:25:44.567 ], 00:25:44.567 "product_name": "Malloc disk", 00:25:44.567 "block_size": 512, 00:25:44.567 "num_blocks": 16384, 00:25:44.567 "uuid": "9b1b913a-3b14-40e6-94d5-076cdf615801", 00:25:44.567 "assigned_rate_limits": { 00:25:44.567 "rw_ios_per_sec": 0, 00:25:44.567 "rw_mbytes_per_sec": 0, 00:25:44.567 "r_mbytes_per_sec": 0, 00:25:44.567 "w_mbytes_per_sec": 0 00:25:44.567 }, 00:25:44.567 "claimed": false, 00:25:44.567 "zoned": false, 00:25:44.567 "supported_io_types": { 00:25:44.567 "read": true, 00:25:44.567 "write": true, 00:25:44.567 "unmap": true, 00:25:44.567 "flush": true, 00:25:44.567 "reset": true, 00:25:44.567 "nvme_admin": false, 00:25:44.567 "nvme_io": false, 00:25:44.567 "nvme_io_md": false, 00:25:44.567 "write_zeroes": true, 00:25:44.567 "zcopy": true, 00:25:44.567 "get_zone_info": false, 00:25:44.567 "zone_management": false, 00:25:44.567 "zone_append": false, 00:25:44.567 "compare": false, 00:25:44.567 
"compare_and_write": false, 00:25:44.567 "abort": true, 00:25:44.567 "seek_hole": false, 00:25:44.567 "seek_data": false, 00:25:44.568 "copy": true, 00:25:44.568 "nvme_iov_md": false 00:25:44.568 }, 00:25:44.568 "memory_domains": [ 00:25:44.568 { 00:25:44.568 "dma_device_id": "system", 00:25:44.568 "dma_device_type": 1 00:25:44.568 }, 00:25:44.568 { 00:25:44.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.568 "dma_device_type": 2 00:25:44.568 } 00:25:44.568 ], 00:25:44.568 "driver_specific": {} 00:25:44.568 } 00:25:44.568 ]' 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.568 [2024-11-20 05:37:04.412526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:25:44.568 [2024-11-20 05:37:04.412616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.568 [2024-11-20 05:37:04.412643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:25:44.568 [2024-11-20 05:37:04.412656] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.568 [2024-11-20 05:37:04.415283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.568 [2024-11-20 05:37:04.415334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:25:44.568 Passthru0 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:25:44.568 { 00:25:44.568 "name": "Malloc2", 00:25:44.568 "aliases": [ 00:25:44.568 "9b1b913a-3b14-40e6-94d5-076cdf615801" 00:25:44.568 ], 00:25:44.568 "product_name": "Malloc disk", 00:25:44.568 "block_size": 512, 00:25:44.568 "num_blocks": 16384, 00:25:44.568 "uuid": "9b1b913a-3b14-40e6-94d5-076cdf615801", 00:25:44.568 "assigned_rate_limits": { 00:25:44.568 "rw_ios_per_sec": 0, 00:25:44.568 "rw_mbytes_per_sec": 0, 00:25:44.568 "r_mbytes_per_sec": 0, 00:25:44.568 "w_mbytes_per_sec": 0 00:25:44.568 }, 00:25:44.568 "claimed": true, 00:25:44.568 "claim_type": "exclusive_write", 00:25:44.568 "zoned": false, 00:25:44.568 "supported_io_types": { 00:25:44.568 "read": true, 00:25:44.568 "write": true, 00:25:44.568 "unmap": true, 00:25:44.568 "flush": true, 00:25:44.568 "reset": true, 00:25:44.568 "nvme_admin": false, 00:25:44.568 "nvme_io": false, 00:25:44.568 "nvme_io_md": false, 00:25:44.568 "write_zeroes": true, 00:25:44.568 "zcopy": true, 00:25:44.568 "get_zone_info": false, 00:25:44.568 "zone_management": false, 00:25:44.568 "zone_append": false, 00:25:44.568 "compare": false, 00:25:44.568 "compare_and_write": false, 00:25:44.568 "abort": true, 00:25:44.568 "seek_hole": false, 00:25:44.568 "seek_data": false, 
00:25:44.568 "copy": true, 00:25:44.568 "nvme_iov_md": false 00:25:44.568 }, 00:25:44.568 "memory_domains": [ 00:25:44.568 { 00:25:44.568 "dma_device_id": "system", 00:25:44.568 "dma_device_type": 1 00:25:44.568 }, 00:25:44.568 { 00:25:44.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.568 "dma_device_type": 2 00:25:44.568 } 00:25:44.568 ], 00:25:44.568 "driver_specific": {} 00:25:44.568 }, 00:25:44.568 { 00:25:44.568 "name": "Passthru0", 00:25:44.568 "aliases": [ 00:25:44.568 "08c7ff7d-4bb2-505f-9311-3371d36a25d4" 00:25:44.568 ], 00:25:44.568 "product_name": "passthru", 00:25:44.568 "block_size": 512, 00:25:44.568 "num_blocks": 16384, 00:25:44.568 "uuid": "08c7ff7d-4bb2-505f-9311-3371d36a25d4", 00:25:44.568 "assigned_rate_limits": { 00:25:44.568 "rw_ios_per_sec": 0, 00:25:44.568 "rw_mbytes_per_sec": 0, 00:25:44.568 "r_mbytes_per_sec": 0, 00:25:44.568 "w_mbytes_per_sec": 0 00:25:44.568 }, 00:25:44.568 "claimed": false, 00:25:44.568 "zoned": false, 00:25:44.568 "supported_io_types": { 00:25:44.568 "read": true, 00:25:44.568 "write": true, 00:25:44.568 "unmap": true, 00:25:44.568 "flush": true, 00:25:44.568 "reset": true, 00:25:44.568 "nvme_admin": false, 00:25:44.568 "nvme_io": false, 00:25:44.568 "nvme_io_md": false, 00:25:44.568 "write_zeroes": true, 00:25:44.568 "zcopy": true, 00:25:44.568 "get_zone_info": false, 00:25:44.568 "zone_management": false, 00:25:44.568 "zone_append": false, 00:25:44.568 "compare": false, 00:25:44.568 "compare_and_write": false, 00:25:44.568 "abort": true, 00:25:44.568 "seek_hole": false, 00:25:44.568 "seek_data": false, 00:25:44.568 "copy": true, 00:25:44.568 "nvme_iov_md": false 00:25:44.568 }, 00:25:44.568 "memory_domains": [ 00:25:44.568 { 00:25:44.568 "dma_device_id": "system", 00:25:44.568 "dma_device_type": 1 00:25:44.568 }, 00:25:44.568 { 00:25:44.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.568 "dma_device_type": 2 00:25:44.568 } 00:25:44.568 ], 00:25:44.568 "driver_specific": { 00:25:44.568 "passthru": { 00:25:44.568 "name": "Passthru0", 00:25:44.568 "base_bdev_name": "Malloc2" 00:25:44.568 } 00:25:44.568 } 00:25:44.568 } 00:25:44.568 ]' 00:25:44.568 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:25:44.827 00:25:44.827 real 0m0.361s 00:25:44.827 user 0m0.197s 00:25:44.827 sys 0m0.062s 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:44.827 05:37:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:25:44.827 ************************************ 00:25:44.827 END TEST rpc_daemon_integrity 00:25:44.827 ************************************ 00:25:44.827 05:37:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:44.827 05:37:04 rpc -- rpc/rpc.sh@84 -- # killprocess 58107 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@952 -- # '[' -z 58107 ']' 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@956 -- # kill -0 58107 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@957 -- # uname 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58107 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:44.827 killing process with pid 58107 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58107' 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@971 -- # kill 58107 00:25:44.827 05:37:04 rpc -- common/autotest_common.sh@976 -- # wait 58107 00:25:48.119 00:25:48.119 real 0m5.651s 00:25:48.119 user 0m6.279s 00:25:48.119 sys 0m0.977s 00:25:48.119 05:37:07 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:48.119 05:37:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:25:48.119 ************************************ 00:25:48.119 END TEST rpc 00:25:48.119 ************************************ 00:25:48.119 05:37:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:25:48.119 05:37:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:48.119 05:37:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:48.119 05:37:07 -- common/autotest_common.sh@10 -- # set +x 00:25:48.119 ************************************ 00:25:48.119 START TEST skip_rpc 00:25:48.119 ************************************ 00:25:48.119 05:37:07 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:25:48.119 * Looking for test storage... 
00:25:48.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:25:48.119 05:37:07 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:48.119 05:37:07 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:48.119 05:37:07 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:48.119 05:37:07 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:48.119 05:37:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:48.120 05:37:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:48.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.120 --rc genhtml_branch_coverage=1 00:25:48.120 --rc genhtml_function_coverage=1 00:25:48.120 --rc genhtml_legend=1 00:25:48.120 --rc geninfo_all_blocks=1 00:25:48.120 --rc geninfo_unexecuted_blocks=1 00:25:48.120 00:25:48.120 ' 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:48.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.120 --rc genhtml_branch_coverage=1 00:25:48.120 --rc genhtml_function_coverage=1 00:25:48.120 --rc genhtml_legend=1 00:25:48.120 --rc geninfo_all_blocks=1 00:25:48.120 --rc geninfo_unexecuted_blocks=1 00:25:48.120 00:25:48.120 ' 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:25:48.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.120 --rc genhtml_branch_coverage=1 00:25:48.120 --rc genhtml_function_coverage=1 00:25:48.120 --rc genhtml_legend=1 00:25:48.120 --rc geninfo_all_blocks=1 00:25:48.120 --rc geninfo_unexecuted_blocks=1 00:25:48.120 00:25:48.120 ' 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:48.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.120 --rc genhtml_branch_coverage=1 00:25:48.120 --rc genhtml_function_coverage=1 00:25:48.120 --rc genhtml_legend=1 00:25:48.120 --rc geninfo_all_blocks=1 00:25:48.120 --rc geninfo_unexecuted_blocks=1 00:25:48.120 00:25:48.120 ' 00:25:48.120 05:37:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:48.120 05:37:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:25:48.120 05:37:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:48.120 05:37:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:48.120 ************************************ 00:25:48.120 START TEST skip_rpc 00:25:48.120 ************************************ 00:25:48.120 05:37:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:25:48.120 05:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58347 00:25:48.120 05:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:25:48.120 05:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:48.120 05:37:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:25:48.120 [2024-11-20 05:37:07.726070] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:48.120 [2024-11-20 05:37:07.726257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58347 ] 00:25:48.120 [2024-11-20 05:37:07.905500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.379 [2024-11-20 05:37:08.050918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58347 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58347 ']' 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58347 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58347 00:25:53.669 killing process with pid 58347 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58347' 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58347 00:25:53.669 05:37:12 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58347 00:25:56.205 00:25:56.205 real 0m8.060s 00:25:56.205 user 0m7.394s 00:25:56.205 sys 0m0.582s 00:25:56.205 05:37:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:56.205 ************************************ 00:25:56.205 END TEST skip_rpc 00:25:56.205 ************************************ 00:25:56.205 05:37:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:25:56.205 05:37:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:25:56.205 05:37:15 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:56.205 05:37:15 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:56.205 05:37:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:56.205 ************************************ 00:25:56.206 START TEST skip_rpc_with_json 00:25:56.206 ************************************ 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58461 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58461 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58461 ']' 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:56.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:56.206 05:37:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:56.206 [2024-11-20 05:37:15.861112] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:25:56.206 [2024-11-20 05:37:15.861270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58461 ] 00:25:56.206 [2024-11-20 05:37:16.045726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.465 [2024-11-20 05:37:16.202470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:57.842 [2024-11-20 05:37:17.387639] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:25:57.842 request: 00:25:57.842 { 00:25:57.842 "trtype": "tcp", 00:25:57.842 "method": "nvmf_get_transports", 00:25:57.842 "req_id": 1 00:25:57.842 } 00:25:57.842 Got JSON-RPC error response 00:25:57.842 response: 00:25:57.842 { 00:25:57.842 "code": -19, 00:25:57.842 "message": "No such device" 00:25:57.842 } 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:57.842 [2024-11-20 05:37:17.395831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.842 05:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:25:57.842 { 00:25:57.842 "subsystems": [ 00:25:57.842 { 00:25:57.842 "subsystem": "fsdev", 00:25:57.842 "config": [ 00:25:57.842 { 00:25:57.842 "method": "fsdev_set_opts", 00:25:57.842 "params": { 00:25:57.842 "fsdev_io_pool_size": 65535, 00:25:57.842 "fsdev_io_cache_size": 256 00:25:57.842 } 00:25:57.842 } 00:25:57.842 ] 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "subsystem": "keyring", 00:25:57.842 "config": [] 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "subsystem": "iobuf", 00:25:57.842 "config": [ 00:25:57.842 { 00:25:57.842 "method": "iobuf_set_options", 00:25:57.842 "params": { 00:25:57.842 "small_pool_count": 8192, 00:25:57.842 "large_pool_count": 1024, 00:25:57.842 "small_bufsize": 8192, 00:25:57.842 "large_bufsize": 135168, 00:25:57.842 "enable_numa": false 00:25:57.842 } 00:25:57.842 } 00:25:57.842 ] 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "subsystem": "sock", 00:25:57.842 "config": [ 00:25:57.842 { 
00:25:57.842 "method": "sock_set_default_impl", 00:25:57.842 "params": { 00:25:57.842 "impl_name": "posix" 00:25:57.842 } 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "method": "sock_impl_set_options", 00:25:57.842 "params": { 00:25:57.842 "impl_name": "ssl", 00:25:57.842 "recv_buf_size": 4096, 00:25:57.842 "send_buf_size": 4096, 00:25:57.842 "enable_recv_pipe": true, 00:25:57.842 "enable_quickack": false, 00:25:57.842 "enable_placement_id": 0, 00:25:57.842 "enable_zerocopy_send_server": true, 00:25:57.842 "enable_zerocopy_send_client": false, 00:25:57.842 "zerocopy_threshold": 0, 00:25:57.842 "tls_version": 0, 00:25:57.842 "enable_ktls": false 00:25:57.842 } 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "method": "sock_impl_set_options", 00:25:57.842 "params": { 00:25:57.842 "impl_name": "posix", 00:25:57.842 "recv_buf_size": 2097152, 00:25:57.842 "send_buf_size": 2097152, 00:25:57.842 "enable_recv_pipe": true, 00:25:57.842 "enable_quickack": false, 00:25:57.842 "enable_placement_id": 0, 00:25:57.842 "enable_zerocopy_send_server": true, 00:25:57.842 "enable_zerocopy_send_client": false, 00:25:57.842 "zerocopy_threshold": 0, 00:25:57.842 "tls_version": 0, 00:25:57.842 "enable_ktls": false 00:25:57.842 } 00:25:57.842 } 00:25:57.842 ] 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "subsystem": "vmd", 00:25:57.842 "config": [] 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "subsystem": "accel", 00:25:57.842 "config": [ 00:25:57.842 { 00:25:57.842 "method": "accel_set_options", 00:25:57.842 "params": { 00:25:57.842 "small_cache_size": 128, 00:25:57.842 "large_cache_size": 16, 00:25:57.842 "task_count": 2048, 00:25:57.842 "sequence_count": 2048, 00:25:57.842 "buf_count": 2048 00:25:57.842 } 00:25:57.842 } 00:25:57.842 ] 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "subsystem": "bdev", 00:25:57.842 "config": [ 00:25:57.842 { 00:25:57.842 "method": "bdev_set_options", 00:25:57.842 "params": { 00:25:57.842 "bdev_io_pool_size": 65535, 00:25:57.842 "bdev_io_cache_size": 256, 00:25:57.842 "bdev_auto_examine": true, 00:25:57.842 "iobuf_small_cache_size": 128, 00:25:57.842 "iobuf_large_cache_size": 16 00:25:57.842 } 00:25:57.842 }, 00:25:57.842 { 00:25:57.842 "method": "bdev_raid_set_options", 00:25:57.842 "params": { 00:25:57.842 "process_window_size_kb": 1024, 00:25:57.842 "process_max_bandwidth_mb_sec": 0 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "bdev_iscsi_set_options", 00:25:57.843 "params": { 00:25:57.843 "timeout_sec": 30 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "bdev_nvme_set_options", 00:25:57.843 "params": { 00:25:57.843 "action_on_timeout": "none", 00:25:57.843 "timeout_us": 0, 00:25:57.843 "timeout_admin_us": 0, 00:25:57.843 "keep_alive_timeout_ms": 10000, 00:25:57.843 "arbitration_burst": 0, 00:25:57.843 "low_priority_weight": 0, 00:25:57.843 "medium_priority_weight": 0, 00:25:57.843 "high_priority_weight": 0, 00:25:57.843 "nvme_adminq_poll_period_us": 10000, 00:25:57.843 "nvme_ioq_poll_period_us": 0, 00:25:57.843 "io_queue_requests": 0, 00:25:57.843 "delay_cmd_submit": true, 00:25:57.843 "transport_retry_count": 4, 00:25:57.843 "bdev_retry_count": 3, 00:25:57.843 "transport_ack_timeout": 0, 00:25:57.843 "ctrlr_loss_timeout_sec": 0, 00:25:57.843 "reconnect_delay_sec": 0, 00:25:57.843 "fast_io_fail_timeout_sec": 0, 00:25:57.843 "disable_auto_failback": false, 00:25:57.843 "generate_uuids": false, 00:25:57.843 "transport_tos": 0, 00:25:57.843 "nvme_error_stat": false, 00:25:57.843 "rdma_srq_size": 0, 00:25:57.843 "io_path_stat": false, 
00:25:57.843 "allow_accel_sequence": false, 00:25:57.843 "rdma_max_cq_size": 0, 00:25:57.843 "rdma_cm_event_timeout_ms": 0, 00:25:57.843 "dhchap_digests": [ 00:25:57.843 "sha256", 00:25:57.843 "sha384", 00:25:57.843 "sha512" 00:25:57.843 ], 00:25:57.843 "dhchap_dhgroups": [ 00:25:57.843 "null", 00:25:57.843 "ffdhe2048", 00:25:57.843 "ffdhe3072", 00:25:57.843 "ffdhe4096", 00:25:57.843 "ffdhe6144", 00:25:57.843 "ffdhe8192" 00:25:57.843 ] 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "bdev_nvme_set_hotplug", 00:25:57.843 "params": { 00:25:57.843 "period_us": 100000, 00:25:57.843 "enable": false 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "bdev_wait_for_examine" 00:25:57.843 } 00:25:57.843 ] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "scsi", 00:25:57.843 "config": null 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "scheduler", 00:25:57.843 "config": [ 00:25:57.843 { 00:25:57.843 "method": "framework_set_scheduler", 00:25:57.843 "params": { 00:25:57.843 "name": "static" 00:25:57.843 } 00:25:57.843 } 00:25:57.843 ] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "vhost_scsi", 00:25:57.843 "config": [] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "vhost_blk", 00:25:57.843 "config": [] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "ublk", 00:25:57.843 "config": [] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "nbd", 00:25:57.843 "config": [] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "nvmf", 00:25:57.843 "config": [ 00:25:57.843 { 00:25:57.843 "method": "nvmf_set_config", 00:25:57.843 "params": { 00:25:57.843 "discovery_filter": "match_any", 00:25:57.843 "admin_cmd_passthru": { 00:25:57.843 "identify_ctrlr": false 00:25:57.843 }, 00:25:57.843 "dhchap_digests": [ 00:25:57.843 "sha256", 00:25:57.843 "sha384", 00:25:57.843 "sha512" 00:25:57.843 ], 00:25:57.843 "dhchap_dhgroups": [ 00:25:57.843 "null", 00:25:57.843 "ffdhe2048", 00:25:57.843 "ffdhe3072", 00:25:57.843 "ffdhe4096", 00:25:57.843 "ffdhe6144", 00:25:57.843 "ffdhe8192" 00:25:57.843 ] 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "nvmf_set_max_subsystems", 00:25:57.843 "params": { 00:25:57.843 "max_subsystems": 1024 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "nvmf_set_crdt", 00:25:57.843 "params": { 00:25:57.843 "crdt1": 0, 00:25:57.843 "crdt2": 0, 00:25:57.843 "crdt3": 0 00:25:57.843 } 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "method": "nvmf_create_transport", 00:25:57.843 "params": { 00:25:57.843 "trtype": "TCP", 00:25:57.843 "max_queue_depth": 128, 00:25:57.843 "max_io_qpairs_per_ctrlr": 127, 00:25:57.843 "in_capsule_data_size": 4096, 00:25:57.843 "max_io_size": 131072, 00:25:57.843 "io_unit_size": 131072, 00:25:57.843 "max_aq_depth": 128, 00:25:57.843 "num_shared_buffers": 511, 00:25:57.843 "buf_cache_size": 4294967295, 00:25:57.843 "dif_insert_or_strip": false, 00:25:57.843 "zcopy": false, 00:25:57.843 "c2h_success": true, 00:25:57.843 "sock_priority": 0, 00:25:57.843 "abort_timeout_sec": 1, 00:25:57.843 "ack_timeout": 0, 00:25:57.843 "data_wr_pool_size": 0 00:25:57.843 } 00:25:57.843 } 00:25:57.843 ] 00:25:57.843 }, 00:25:57.843 { 00:25:57.843 "subsystem": "iscsi", 00:25:57.843 "config": [ 00:25:57.843 { 00:25:57.843 "method": "iscsi_set_options", 00:25:57.843 "params": { 00:25:57.843 "node_base": "iqn.2016-06.io.spdk", 00:25:57.843 "max_sessions": 128, 00:25:57.843 "max_connections_per_session": 2, 00:25:57.843 "max_queue_depth": 64, 00:25:57.843 
"default_time2wait": 2, 00:25:57.843 "default_time2retain": 20, 00:25:57.843 "first_burst_length": 8192, 00:25:57.843 "immediate_data": true, 00:25:57.843 "allow_duplicated_isid": false, 00:25:57.843 "error_recovery_level": 0, 00:25:57.843 "nop_timeout": 60, 00:25:57.843 "nop_in_interval": 30, 00:25:57.843 "disable_chap": false, 00:25:57.843 "require_chap": false, 00:25:57.843 "mutual_chap": false, 00:25:57.843 "chap_group": 0, 00:25:57.843 "max_large_datain_per_connection": 64, 00:25:57.843 "max_r2t_per_connection": 4, 00:25:57.843 "pdu_pool_size": 36864, 00:25:57.843 "immediate_data_pool_size": 16384, 00:25:57.843 "data_out_pool_size": 2048 00:25:57.843 } 00:25:57.843 } 00:25:57.843 ] 00:25:57.843 } 00:25:57.843 ] 00:25:57.843 } 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58461 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58461 ']' 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58461 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58461 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:57.843 killing process with pid 58461 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58461' 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58461 00:25:57.843 05:37:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58461 00:26:01.178 05:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58518 00:26:01.178 05:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:01.178 05:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58518 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58518 ']' 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58518 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58518 00:26:06.453 killing process with pid 58518 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58518' 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58518 00:26:06.453 05:37:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58518 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:26:08.997 00:26:08.997 real 0m12.753s 00:26:08.997 user 0m11.829s 00:26:08.997 sys 0m1.300s 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:08.997 ************************************ 00:26:08.997 END TEST skip_rpc_with_json 00:26:08.997 ************************************ 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:26:08.997 05:37:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:26:08.997 05:37:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:08.997 05:37:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:08.997 05:37:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.997 ************************************ 00:26:08.997 START TEST skip_rpc_with_delay 00:26:08.997 ************************************ 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:26:08.997 [2024-11-20 05:37:28.668715] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
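The ERROR above is the whole point of skip_rpc_with_delay: spdk_tgt must refuse --wait-for-rpc when its RPC server is disabled. A hedged sketch of the same negative check, reusing the binary path and flags already shown in the trace:

  TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # Expected to fail fast: --wait-for-rpc blocks startup until an RPC call
  # resumes it, which is impossible when --no-rpc-server disables the server.
  if $TGT --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: flag combination was accepted" >&2
      exit 1
  fi
  echo "rejected --no-rpc-server + --wait-for-rpc as expected"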
00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.997 00:26:08.997 real 0m0.192s 00:26:08.997 user 0m0.109s 00:26:08.997 sys 0m0.081s 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:08.997 ************************************ 00:26:08.997 END TEST skip_rpc_with_delay 00:26:08.997 ************************************ 00:26:08.997 05:37:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:26:08.997 05:37:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:26:08.997 05:37:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:26:08.997 05:37:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:26:08.997 05:37:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:08.997 05:37:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:08.997 05:37:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:08.997 ************************************ 00:26:08.997 START TEST exit_on_failed_rpc_init 00:26:08.997 ************************************ 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58657 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58657 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58657 ']' 00:26:08.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:08.997 05:37:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:09.257 [2024-11-20 05:37:28.931888] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:09.257 [2024-11-20 05:37:28.932544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:26:09.257 [2024-11-20 05:37:29.116603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.516 [2024-11-20 05:37:29.266535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.895 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:10.895 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:26:10.896 05:37:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:26:10.896 [2024-11-20 05:37:30.542616] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:10.896 [2024-11-20 05:37:30.542895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58681 ] 00:26:10.896 [2024-11-20 05:37:30.727272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.155 [2024-11-20 05:37:30.877629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.155 [2024-11-20 05:37:30.877942] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
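The socket-in-use error that follows is provoked on purpose by exit_on_failed_rpc_init: a second spdk_tgt (-m 0x2) is pointed at the default /var/tmp/spdk.sock while the first instance (-m 0x1, pid 58657) still holds it, so RPC initialization fails and the app must stop with a non-zero exit code. A rough sketch under those assumptions (the real test polls for the listen socket rather than sleeping):

  TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $TGT -m 0x1 &          # first target owns /var/tmp/spdk.sock
  first=$!
  sleep 1                # crude; the suite waits for the socket to appear

  if $TGT -m 0x2; then   # same default RPC socket: rpc_listen must fail
      echo "unexpected: second target initialized its RPC server" >&2
  fi
  kill "$first"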
00:26:11.155 [2024-11-20 05:37:30.877969] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:11.155 [2024-11-20 05:37:30.877992] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58657 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58657 ']' 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58657 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58657 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58657' 00:26:11.414 killing process with pid 58657 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58657 00:26:11.414 05:37:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58657 00:26:14.768 00:26:14.768 real 0m5.145s 00:26:14.768 user 0m5.409s 00:26:14.768 sys 0m0.833s 00:26:14.768 05:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.768 05:37:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:26:14.768 ************************************ 00:26:14.768 END TEST exit_on_failed_rpc_init 00:26:14.768 ************************************ 00:26:14.768 05:37:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:26:14.768 00:26:14.768 real 0m26.624s 00:26:14.768 user 0m24.935s 00:26:14.768 sys 0m3.093s 00:26:14.768 05:37:34 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.768 05:37:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:14.768 ************************************ 00:26:14.768 END TEST skip_rpc 00:26:14.768 ************************************ 00:26:14.768 05:37:34 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:14.768 05:37:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:14.768 05:37:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:14.768 05:37:34 -- common/autotest_common.sh@10 -- # set +x 00:26:14.768 
************************************ 00:26:14.768 START TEST rpc_client 00:26:14.768 ************************************ 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:26:14.768 * Looking for test storage... 00:26:14.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.768 05:37:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:14.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.768 --rc genhtml_branch_coverage=1 00:26:14.768 --rc genhtml_function_coverage=1 00:26:14.768 --rc genhtml_legend=1 00:26:14.768 --rc geninfo_all_blocks=1 00:26:14.768 --rc geninfo_unexecuted_blocks=1 00:26:14.768 00:26:14.768 ' 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:14.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.768 --rc genhtml_branch_coverage=1 00:26:14.768 --rc genhtml_function_coverage=1 00:26:14.768 --rc genhtml_legend=1 00:26:14.768 --rc geninfo_all_blocks=1 00:26:14.768 --rc geninfo_unexecuted_blocks=1 00:26:14.768 00:26:14.768 ' 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:14.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.768 --rc genhtml_branch_coverage=1 00:26:14.768 --rc genhtml_function_coverage=1 00:26:14.768 --rc genhtml_legend=1 00:26:14.768 --rc geninfo_all_blocks=1 00:26:14.768 --rc geninfo_unexecuted_blocks=1 00:26:14.768 00:26:14.768 ' 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:14.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.768 --rc genhtml_branch_coverage=1 00:26:14.768 --rc genhtml_function_coverage=1 00:26:14.768 --rc genhtml_legend=1 00:26:14.768 --rc geninfo_all_blocks=1 00:26:14.768 --rc geninfo_unexecuted_blocks=1 00:26:14.768 00:26:14.768 ' 00:26:14.768 05:37:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:26:14.768 OK 00:26:14.768 05:37:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:26:14.768 00:26:14.768 real 0m0.298s 00:26:14.768 user 0m0.154s 00:26:14.768 sys 0m0.158s 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.768 05:37:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:26:14.768 ************************************ 00:26:14.768 END TEST rpc_client 00:26:14.768 ************************************ 00:26:14.768 05:37:34 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:14.768 05:37:34 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:14.768 05:37:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:14.768 05:37:34 -- common/autotest_common.sh@10 -- # set +x 00:26:14.768 ************************************ 00:26:14.768 START TEST json_config 00:26:14.768 ************************************ 00:26:14.768 05:37:34 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:26:14.768 05:37:34 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:14.768 05:37:34 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:26:14.768 05:37:34 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:14.768 05:37:34 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:14.768 05:37:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.768 05:37:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.768 05:37:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.769 05:37:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.769 05:37:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.769 05:37:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.769 05:37:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.769 05:37:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.769 05:37:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.769 05:37:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:26:14.769 05:37:34 json_config -- scripts/common.sh@345 -- # : 1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.769 05:37:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.769 05:37:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@353 -- # local d=1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.769 05:37:34 json_config -- scripts/common.sh@355 -- # echo 1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.769 05:37:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:26:14.769 05:37:34 json_config -- scripts/common.sh@353 -- # local d=2 00:26:14.769 05:37:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.769 05:37:34 json_config -- scripts/common.sh@355 -- # echo 2 00:26:14.769 05:37:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.769 05:37:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.769 05:37:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.769 05:37:34 json_config -- scripts/common.sh@368 -- # return 0 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.769 --rc genhtml_branch_coverage=1 00:26:14.769 --rc genhtml_function_coverage=1 00:26:14.769 --rc genhtml_legend=1 00:26:14.769 --rc geninfo_all_blocks=1 00:26:14.769 --rc geninfo_unexecuted_blocks=1 00:26:14.769 00:26:14.769 ' 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.769 --rc genhtml_branch_coverage=1 00:26:14.769 --rc genhtml_function_coverage=1 00:26:14.769 --rc genhtml_legend=1 00:26:14.769 --rc geninfo_all_blocks=1 00:26:14.769 --rc geninfo_unexecuted_blocks=1 00:26:14.769 00:26:14.769 ' 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.769 --rc genhtml_branch_coverage=1 00:26:14.769 --rc genhtml_function_coverage=1 00:26:14.769 --rc genhtml_legend=1 00:26:14.769 --rc geninfo_all_blocks=1 00:26:14.769 --rc geninfo_unexecuted_blocks=1 00:26:14.769 00:26:14.769 ' 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:14.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.769 --rc genhtml_branch_coverage=1 00:26:14.769 --rc genhtml_function_coverage=1 00:26:14.769 --rc genhtml_legend=1 00:26:14.769 --rc geninfo_all_blocks=1 00:26:14.769 --rc geninfo_unexecuted_blocks=1 00:26:14.769 00:26:14.769 ' 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.769 05:37:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92143e44-19be-4cde-be32-130a9d4b1300 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=92143e44-19be-4cde-be32-130a9d4b1300 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:14.769 05:37:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.769 05:37:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.769 05:37:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.769 05:37:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.769 05:37:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.769 05:37:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.769 05:37:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.769 05:37:34 json_config -- paths/export.sh@5 -- # export PATH 00:26:14.769 05:37:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@51 -- # : 0 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.769 05:37:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.769 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.769 05:37:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:26:14.769 WARNING: No tests are enabled so not running JSON configuration tests 00:26:14.769 05:37:34 json_config -- json_config/json_config.sh@28 -- # exit 0 00:26:14.769 00:26:14.769 real 0m0.225s 00:26:14.769 user 0m0.137s 00:26:14.769 sys 0m0.094s 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:14.769 05:37:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:26:14.769 ************************************ 00:26:14.769 END TEST json_config 00:26:14.769 ************************************ 00:26:15.029 05:37:34 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:15.029 05:37:34 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:15.029 05:37:34 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:15.029 05:37:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.029 ************************************ 00:26:15.029 START TEST json_config_extra_key 00:26:15.029 ************************************ 00:26:15.029 05:37:34 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:26:15.029 05:37:34 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:15.029 05:37:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:15.029 05:37:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:26:15.029 05:37:34 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.029 05:37:34 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.029 05:37:34 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.030 --rc genhtml_branch_coverage=1 00:26:15.030 --rc genhtml_function_coverage=1 00:26:15.030 --rc genhtml_legend=1 00:26:15.030 --rc geninfo_all_blocks=1 00:26:15.030 --rc geninfo_unexecuted_blocks=1 00:26:15.030 00:26:15.030 ' 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.030 --rc genhtml_branch_coverage=1 00:26:15.030 --rc genhtml_function_coverage=1 00:26:15.030 --rc genhtml_legend=1 00:26:15.030 --rc geninfo_all_blocks=1 00:26:15.030 --rc geninfo_unexecuted_blocks=1 00:26:15.030 00:26:15.030 ' 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.030 --rc genhtml_branch_coverage=1 00:26:15.030 --rc genhtml_function_coverage=1 00:26:15.030 --rc genhtml_legend=1 00:26:15.030 --rc geninfo_all_blocks=1 00:26:15.030 --rc geninfo_unexecuted_blocks=1 00:26:15.030 00:26:15.030 ' 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.030 --rc genhtml_branch_coverage=1 00:26:15.030 --rc 
genhtml_function_coverage=1 00:26:15.030 --rc genhtml_legend=1 00:26:15.030 --rc geninfo_all_blocks=1 00:26:15.030 --rc geninfo_unexecuted_blocks=1 00:26:15.030 00:26:15.030 ' 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92143e44-19be-4cde-be32-130a9d4b1300 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=92143e44-19be-4cde-be32-130a9d4b1300 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.030 05:37:34 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.030 05:37:34 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.030 05:37:34 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.030 05:37:34 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.030 05:37:34 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:26:15.030 05:37:34 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.030 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.030 05:37:34 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:26:15.030 INFO: launching applications... 
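[Annotation, not part of the captured output] The "[: : integer expression expected" message that test/nvmf/common.sh line 33 prints twice above is a scripting bug rather than a test failure: the trace shows '[' '' -eq 1 ']', i.e. an unset flag variable being fed to the numeric -eq operator, and the run continues only because the broken test evaluates false. A minimal defensive sketch, assuming a hypothetical flag name FLAG (the real variable name is not visible in the trace):

    # Default the value before the numeric comparison so an unset flag
    # compares as 0 instead of as an empty string.
    if [ "${FLAG:-0}" -eq 1 ]; then
        :   # flag-specific setup would go here
    fi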
00:26:15.030 05:37:34 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58896 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:26:15.030 Waiting for target to run... 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58896 /var/tmp/spdk_tgt.sock 00:26:15.030 05:37:34 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58896 ']' 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:15.030 05:37:34 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:26:15.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:26:15.031 05:37:34 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:15.031 05:37:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:15.291 [2024-11-20 05:37:35.022562] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:15.291 [2024-11-20 05:37:35.022738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58896 ] 00:26:15.860 [2024-11-20 05:37:35.608394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.860 [2024-11-20 05:37:35.735861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.798 00:26:16.798 INFO: shutting down applications... 00:26:16.798 05:37:36 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:16.798 05:37:36 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:26:16.798 05:37:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
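[Annotation, not part of the captured output] The startup sequence above is json_config_test_start_app: spdk_tgt is launched with "-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json", and waitforlisten 58896 blocks until the target is reachable on its RPC socket, bounded by the "local max_retries=100" visible in the trace. A minimal sketch of that wait, assuming a plain socket-existence probe rather than SPDK's actual check (which retries an RPC against the socket):

    pid=58896
    sock=/var/tmp/spdk_tgt.sock
    for ((i = 0; i < 100; i++)); do      # max_retries=100, as in the trace
        kill -0 "$pid" || exit 1         # target died before it ever listened
        [ -S "$sock" ] && break          # the UNIX socket appears once bound
        sleep 0.1
    done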
00:26:16.798 05:37:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58896 ]] 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58896 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:16.798 05:37:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:17.366 05:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:17.366 05:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:17.366 05:37:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:17.366 05:37:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:17.625 05:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:17.625 05:37:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:17.625 05:37:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:17.625 05:37:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:18.193 05:37:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:18.193 05:37:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:18.194 05:37:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:18.194 05:37:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:18.765 05:37:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:18.765 05:37:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:18.765 05:37:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:18.765 05:37:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:19.336 05:37:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:19.336 05:37:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:19.336 05:37:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:19.336 05:37:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:19.910 05:37:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:19.910 05:37:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:19.910 05:37:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:19.910 05:37:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58896 00:26:20.169 SPDK target shutdown done 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@43 -- # break 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n 
'' ]] 00:26:20.169 05:37:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:26:20.169 Success 00:26:20.169 05:37:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:26:20.169 00:26:20.169 real 0m5.347s 00:26:20.169 user 0m4.501s 00:26:20.169 sys 0m0.800s 00:26:20.169 05:37:40 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:20.169 05:37:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:26:20.169 ************************************ 00:26:20.169 END TEST json_config_extra_key 00:26:20.169 ************************************ 00:26:20.428 05:37:40 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:20.428 05:37:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:20.428 05:37:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:20.428 05:37:40 -- common/autotest_common.sh@10 -- # set +x 00:26:20.428 ************************************ 00:26:20.428 START TEST alias_rpc 00:26:20.428 ************************************ 00:26:20.428 05:37:40 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:26:20.428 * Looking for test storage... 00:26:20.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:26:20.428 05:37:40 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:20.428 05:37:40 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:26:20.428 05:37:40 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:20.428 05:37:40 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@345 -- # : 1 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.428 05:37:40 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:26:20.429 05:37:40 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.429 05:37:40 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:26:20.429 05:37:40 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:26:20.429 05:37:40 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.688 05:37:40 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:26:20.688 05:37:40 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.688 05:37:40 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.688 05:37:40 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.688 05:37:40 alias_rpc -- scripts/common.sh@368 -- # return 0 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:20.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.688 --rc genhtml_branch_coverage=1 00:26:20.688 --rc genhtml_function_coverage=1 00:26:20.688 --rc genhtml_legend=1 00:26:20.688 --rc geninfo_all_blocks=1 00:26:20.688 --rc geninfo_unexecuted_blocks=1 00:26:20.688 00:26:20.688 ' 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:20.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.688 --rc genhtml_branch_coverage=1 00:26:20.688 --rc genhtml_function_coverage=1 00:26:20.688 --rc genhtml_legend=1 00:26:20.688 --rc geninfo_all_blocks=1 00:26:20.688 --rc geninfo_unexecuted_blocks=1 00:26:20.688 00:26:20.688 ' 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:20.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.688 --rc genhtml_branch_coverage=1 00:26:20.688 --rc genhtml_function_coverage=1 00:26:20.688 --rc genhtml_legend=1 00:26:20.688 --rc geninfo_all_blocks=1 00:26:20.688 --rc geninfo_unexecuted_blocks=1 00:26:20.688 00:26:20.688 ' 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:20.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.688 --rc genhtml_branch_coverage=1 00:26:20.688 --rc genhtml_function_coverage=1 00:26:20.688 --rc genhtml_legend=1 00:26:20.688 --rc geninfo_all_blocks=1 00:26:20.688 --rc geninfo_unexecuted_blocks=1 00:26:20.688 00:26:20.688 ' 00:26:20.688 05:37:40 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:26:20.688 05:37:40 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59020 00:26:20.688 05:37:40 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:20.688 05:37:40 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59020 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 59020 ']' 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:26:20.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:20.688 05:37:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:20.688 [2024-11-20 05:37:40.459050] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:20.688 [2024-11-20 05:37:40.459789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59020 ] 00:26:20.948 [2024-11-20 05:37:40.642432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.948 [2024-11-20 05:37:40.784853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.327 05:37:41 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.327 05:37:41 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:26:22.327 05:37:41 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:26:22.327 05:37:42 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59020 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 59020 ']' 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 59020 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59020 00:26:22.327 killing process with pid 59020 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59020' 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@971 -- # kill 59020 00:26:22.327 05:37:42 alias_rpc -- common/autotest_common.sh@976 -- # wait 59020 00:26:24.917 ************************************ 00:26:24.917 END TEST alias_rpc 00:26:24.917 ************************************ 00:26:24.917 00:26:24.917 real 0m4.693s 00:26:24.917 user 0m4.502s 00:26:24.917 sys 0m0.778s 00:26:24.917 05:37:44 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:24.917 05:37:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:26:25.177 05:37:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:26:25.177 05:37:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:25.177 05:37:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:25.177 05:37:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:25.177 05:37:44 -- common/autotest_common.sh@10 -- # set +x 00:26:25.177 ************************************ 00:26:25.177 START TEST spdkcli_tcp 00:26:25.177 ************************************ 00:26:25.177 05:37:44 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:26:25.177 * Looking for test storage... 
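[Annotation, not part of the captured output] The alias_rpc teardown a few lines up follows autotest_common.sh's killprocess pattern: probe the pid with kill -0, read its command name with "ps --no-headers -o comm=" (refusing to signal anything named sudo), then send SIGTERM and reap the child with wait. A condensed sketch of that flow (simplified; the real helper has more branches):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                  # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                              # SIGTERM, then reap
    }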
00:26:25.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:25.177 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:25.177 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:26:25.177 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:25.177 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.177 05:37:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:26:25.437 05:37:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.437 05:37:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.437 05:37:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.437 05:37:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.437 --rc genhtml_branch_coverage=1 00:26:25.437 --rc genhtml_function_coverage=1 00:26:25.437 --rc genhtml_legend=1 00:26:25.437 --rc geninfo_all_blocks=1 00:26:25.437 --rc geninfo_unexecuted_blocks=1 00:26:25.437 00:26:25.437 ' 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.437 --rc genhtml_branch_coverage=1 00:26:25.437 --rc genhtml_function_coverage=1 00:26:25.437 --rc genhtml_legend=1 00:26:25.437 --rc geninfo_all_blocks=1 00:26:25.437 --rc geninfo_unexecuted_blocks=1 00:26:25.437 
00:26:25.437 ' 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.437 --rc genhtml_branch_coverage=1 00:26:25.437 --rc genhtml_function_coverage=1 00:26:25.437 --rc genhtml_legend=1 00:26:25.437 --rc geninfo_all_blocks=1 00:26:25.437 --rc geninfo_unexecuted_blocks=1 00:26:25.437 00:26:25.437 ' 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.437 --rc genhtml_branch_coverage=1 00:26:25.437 --rc genhtml_function_coverage=1 00:26:25.437 --rc genhtml_legend=1 00:26:25.437 --rc geninfo_all_blocks=1 00:26:25.437 --rc geninfo_unexecuted_blocks=1 00:26:25.437 00:26:25.437 ' 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59127 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:26:25.437 05:37:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59127 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 59127 ']' 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:25.437 05:37:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:25.437 [2024-11-20 05:37:45.222362] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:25.437 [2024-11-20 05:37:45.222588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59127 ] 00:26:25.697 [2024-11-20 05:37:45.403072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:25.697 [2024-11-20 05:37:45.557596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.697 [2024-11-20 05:37:45.557674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.077 05:37:46 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:27.077 05:37:46 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:26:27.077 05:37:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:26:27.077 05:37:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59155 00:26:27.077 05:37:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:26:27.077 [ 00:26:27.077 "bdev_malloc_delete", 00:26:27.077 "bdev_malloc_create", 00:26:27.077 "bdev_null_resize", 00:26:27.077 "bdev_null_delete", 00:26:27.077 "bdev_null_create", 00:26:27.077 "bdev_nvme_cuse_unregister", 00:26:27.077 "bdev_nvme_cuse_register", 00:26:27.077 "bdev_opal_new_user", 00:26:27.077 "bdev_opal_set_lock_state", 00:26:27.077 "bdev_opal_delete", 00:26:27.077 "bdev_opal_get_info", 00:26:27.077 "bdev_opal_create", 00:26:27.077 "bdev_nvme_opal_revert", 00:26:27.077 "bdev_nvme_opal_init", 00:26:27.077 "bdev_nvme_send_cmd", 00:26:27.077 "bdev_nvme_set_keys", 00:26:27.077 "bdev_nvme_get_path_iostat", 00:26:27.077 "bdev_nvme_get_mdns_discovery_info", 00:26:27.077 "bdev_nvme_stop_mdns_discovery", 00:26:27.077 "bdev_nvme_start_mdns_discovery", 00:26:27.077 "bdev_nvme_set_multipath_policy", 00:26:27.077 "bdev_nvme_set_preferred_path", 00:26:27.078 "bdev_nvme_get_io_paths", 00:26:27.078 "bdev_nvme_remove_error_injection", 00:26:27.078 "bdev_nvme_add_error_injection", 00:26:27.078 "bdev_nvme_get_discovery_info", 00:26:27.078 "bdev_nvme_stop_discovery", 00:26:27.078 "bdev_nvme_start_discovery", 00:26:27.078 "bdev_nvme_get_controller_health_info", 00:26:27.078 "bdev_nvme_disable_controller", 00:26:27.078 "bdev_nvme_enable_controller", 00:26:27.078 "bdev_nvme_reset_controller", 00:26:27.078 "bdev_nvme_get_transport_statistics", 00:26:27.078 "bdev_nvme_apply_firmware", 00:26:27.078 "bdev_nvme_detach_controller", 00:26:27.078 "bdev_nvme_get_controllers", 00:26:27.078 "bdev_nvme_attach_controller", 00:26:27.078 "bdev_nvme_set_hotplug", 00:26:27.078 "bdev_nvme_set_options", 00:26:27.078 "bdev_passthru_delete", 00:26:27.078 "bdev_passthru_create", 00:26:27.078 "bdev_lvol_set_parent_bdev", 00:26:27.078 "bdev_lvol_set_parent", 00:26:27.078 "bdev_lvol_check_shallow_copy", 00:26:27.078 "bdev_lvol_start_shallow_copy", 00:26:27.078 "bdev_lvol_grow_lvstore", 00:26:27.078 "bdev_lvol_get_lvols", 00:26:27.078 "bdev_lvol_get_lvstores", 00:26:27.078 "bdev_lvol_delete", 00:26:27.078 "bdev_lvol_set_read_only", 00:26:27.078 "bdev_lvol_resize", 00:26:27.078 "bdev_lvol_decouple_parent", 00:26:27.078 "bdev_lvol_inflate", 00:26:27.078 "bdev_lvol_rename", 00:26:27.078 "bdev_lvol_clone_bdev", 00:26:27.078 "bdev_lvol_clone", 00:26:27.078 "bdev_lvol_snapshot", 00:26:27.078 "bdev_lvol_create", 00:26:27.078 "bdev_lvol_delete_lvstore", 00:26:27.078 "bdev_lvol_rename_lvstore", 00:26:27.078 
"bdev_lvol_create_lvstore", 00:26:27.078 "bdev_raid_set_options", 00:26:27.078 "bdev_raid_remove_base_bdev", 00:26:27.078 "bdev_raid_add_base_bdev", 00:26:27.078 "bdev_raid_delete", 00:26:27.078 "bdev_raid_create", 00:26:27.078 "bdev_raid_get_bdevs", 00:26:27.078 "bdev_error_inject_error", 00:26:27.078 "bdev_error_delete", 00:26:27.078 "bdev_error_create", 00:26:27.078 "bdev_split_delete", 00:26:27.078 "bdev_split_create", 00:26:27.078 "bdev_delay_delete", 00:26:27.078 "bdev_delay_create", 00:26:27.078 "bdev_delay_update_latency", 00:26:27.078 "bdev_zone_block_delete", 00:26:27.078 "bdev_zone_block_create", 00:26:27.078 "blobfs_create", 00:26:27.078 "blobfs_detect", 00:26:27.078 "blobfs_set_cache_size", 00:26:27.078 "bdev_xnvme_delete", 00:26:27.078 "bdev_xnvme_create", 00:26:27.078 "bdev_aio_delete", 00:26:27.078 "bdev_aio_rescan", 00:26:27.078 "bdev_aio_create", 00:26:27.078 "bdev_ftl_set_property", 00:26:27.078 "bdev_ftl_get_properties", 00:26:27.078 "bdev_ftl_get_stats", 00:26:27.078 "bdev_ftl_unmap", 00:26:27.078 "bdev_ftl_unload", 00:26:27.078 "bdev_ftl_delete", 00:26:27.078 "bdev_ftl_load", 00:26:27.078 "bdev_ftl_create", 00:26:27.078 "bdev_virtio_attach_controller", 00:26:27.078 "bdev_virtio_scsi_get_devices", 00:26:27.078 "bdev_virtio_detach_controller", 00:26:27.078 "bdev_virtio_blk_set_hotplug", 00:26:27.078 "bdev_iscsi_delete", 00:26:27.078 "bdev_iscsi_create", 00:26:27.078 "bdev_iscsi_set_options", 00:26:27.078 "accel_error_inject_error", 00:26:27.078 "ioat_scan_accel_module", 00:26:27.078 "dsa_scan_accel_module", 00:26:27.078 "iaa_scan_accel_module", 00:26:27.078 "keyring_file_remove_key", 00:26:27.078 "keyring_file_add_key", 00:26:27.078 "keyring_linux_set_options", 00:26:27.078 "fsdev_aio_delete", 00:26:27.078 "fsdev_aio_create", 00:26:27.078 "iscsi_get_histogram", 00:26:27.078 "iscsi_enable_histogram", 00:26:27.078 "iscsi_set_options", 00:26:27.078 "iscsi_get_auth_groups", 00:26:27.078 "iscsi_auth_group_remove_secret", 00:26:27.078 "iscsi_auth_group_add_secret", 00:26:27.078 "iscsi_delete_auth_group", 00:26:27.078 "iscsi_create_auth_group", 00:26:27.078 "iscsi_set_discovery_auth", 00:26:27.078 "iscsi_get_options", 00:26:27.078 "iscsi_target_node_request_logout", 00:26:27.078 "iscsi_target_node_set_redirect", 00:26:27.078 "iscsi_target_node_set_auth", 00:26:27.078 "iscsi_target_node_add_lun", 00:26:27.078 "iscsi_get_stats", 00:26:27.078 "iscsi_get_connections", 00:26:27.078 "iscsi_portal_group_set_auth", 00:26:27.078 "iscsi_start_portal_group", 00:26:27.078 "iscsi_delete_portal_group", 00:26:27.078 "iscsi_create_portal_group", 00:26:27.078 "iscsi_get_portal_groups", 00:26:27.078 "iscsi_delete_target_node", 00:26:27.078 "iscsi_target_node_remove_pg_ig_maps", 00:26:27.078 "iscsi_target_node_add_pg_ig_maps", 00:26:27.078 "iscsi_create_target_node", 00:26:27.078 "iscsi_get_target_nodes", 00:26:27.078 "iscsi_delete_initiator_group", 00:26:27.078 "iscsi_initiator_group_remove_initiators", 00:26:27.078 "iscsi_initiator_group_add_initiators", 00:26:27.078 "iscsi_create_initiator_group", 00:26:27.078 "iscsi_get_initiator_groups", 00:26:27.078 "nvmf_set_crdt", 00:26:27.078 "nvmf_set_config", 00:26:27.078 "nvmf_set_max_subsystems", 00:26:27.078 "nvmf_stop_mdns_prr", 00:26:27.078 "nvmf_publish_mdns_prr", 00:26:27.078 "nvmf_subsystem_get_listeners", 00:26:27.078 "nvmf_subsystem_get_qpairs", 00:26:27.078 "nvmf_subsystem_get_controllers", 00:26:27.078 "nvmf_get_stats", 00:26:27.078 "nvmf_get_transports", 00:26:27.078 "nvmf_create_transport", 00:26:27.078 "nvmf_get_targets", 00:26:27.078 
"nvmf_delete_target", 00:26:27.078 "nvmf_create_target", 00:26:27.078 "nvmf_subsystem_allow_any_host", 00:26:27.078 "nvmf_subsystem_set_keys", 00:26:27.078 "nvmf_subsystem_remove_host", 00:26:27.078 "nvmf_subsystem_add_host", 00:26:27.078 "nvmf_ns_remove_host", 00:26:27.078 "nvmf_ns_add_host", 00:26:27.078 "nvmf_subsystem_remove_ns", 00:26:27.078 "nvmf_subsystem_set_ns_ana_group", 00:26:27.078 "nvmf_subsystem_add_ns", 00:26:27.078 "nvmf_subsystem_listener_set_ana_state", 00:26:27.078 "nvmf_discovery_get_referrals", 00:26:27.078 "nvmf_discovery_remove_referral", 00:26:27.078 "nvmf_discovery_add_referral", 00:26:27.078 "nvmf_subsystem_remove_listener", 00:26:27.078 "nvmf_subsystem_add_listener", 00:26:27.078 "nvmf_delete_subsystem", 00:26:27.078 "nvmf_create_subsystem", 00:26:27.078 "nvmf_get_subsystems", 00:26:27.078 "env_dpdk_get_mem_stats", 00:26:27.078 "nbd_get_disks", 00:26:27.078 "nbd_stop_disk", 00:26:27.078 "nbd_start_disk", 00:26:27.078 "ublk_recover_disk", 00:26:27.078 "ublk_get_disks", 00:26:27.078 "ublk_stop_disk", 00:26:27.078 "ublk_start_disk", 00:26:27.078 "ublk_destroy_target", 00:26:27.078 "ublk_create_target", 00:26:27.078 "virtio_blk_create_transport", 00:26:27.079 "virtio_blk_get_transports", 00:26:27.079 "vhost_controller_set_coalescing", 00:26:27.079 "vhost_get_controllers", 00:26:27.079 "vhost_delete_controller", 00:26:27.079 "vhost_create_blk_controller", 00:26:27.079 "vhost_scsi_controller_remove_target", 00:26:27.079 "vhost_scsi_controller_add_target", 00:26:27.079 "vhost_start_scsi_controller", 00:26:27.079 "vhost_create_scsi_controller", 00:26:27.079 "thread_set_cpumask", 00:26:27.079 "scheduler_set_options", 00:26:27.079 "framework_get_governor", 00:26:27.079 "framework_get_scheduler", 00:26:27.079 "framework_set_scheduler", 00:26:27.079 "framework_get_reactors", 00:26:27.079 "thread_get_io_channels", 00:26:27.079 "thread_get_pollers", 00:26:27.079 "thread_get_stats", 00:26:27.079 "framework_monitor_context_switch", 00:26:27.079 "spdk_kill_instance", 00:26:27.079 "log_enable_timestamps", 00:26:27.079 "log_get_flags", 00:26:27.079 "log_clear_flag", 00:26:27.079 "log_set_flag", 00:26:27.079 "log_get_level", 00:26:27.079 "log_set_level", 00:26:27.079 "log_get_print_level", 00:26:27.079 "log_set_print_level", 00:26:27.079 "framework_enable_cpumask_locks", 00:26:27.079 "framework_disable_cpumask_locks", 00:26:27.079 "framework_wait_init", 00:26:27.079 "framework_start_init", 00:26:27.079 "scsi_get_devices", 00:26:27.079 "bdev_get_histogram", 00:26:27.079 "bdev_enable_histogram", 00:26:27.079 "bdev_set_qos_limit", 00:26:27.079 "bdev_set_qd_sampling_period", 00:26:27.079 "bdev_get_bdevs", 00:26:27.079 "bdev_reset_iostat", 00:26:27.079 "bdev_get_iostat", 00:26:27.079 "bdev_examine", 00:26:27.079 "bdev_wait_for_examine", 00:26:27.079 "bdev_set_options", 00:26:27.079 "accel_get_stats", 00:26:27.079 "accel_set_options", 00:26:27.079 "accel_set_driver", 00:26:27.079 "accel_crypto_key_destroy", 00:26:27.079 "accel_crypto_keys_get", 00:26:27.079 "accel_crypto_key_create", 00:26:27.079 "accel_assign_opc", 00:26:27.079 "accel_get_module_info", 00:26:27.079 "accel_get_opc_assignments", 00:26:27.079 "vmd_rescan", 00:26:27.079 "vmd_remove_device", 00:26:27.079 "vmd_enable", 00:26:27.079 "sock_get_default_impl", 00:26:27.079 "sock_set_default_impl", 00:26:27.079 "sock_impl_set_options", 00:26:27.079 "sock_impl_get_options", 00:26:27.079 "iobuf_get_stats", 00:26:27.079 "iobuf_set_options", 00:26:27.079 "keyring_get_keys", 00:26:27.079 "framework_get_pci_devices", 00:26:27.079 
"framework_get_config", 00:26:27.079 "framework_get_subsystems", 00:26:27.079 "fsdev_set_opts", 00:26:27.079 "fsdev_get_opts", 00:26:27.079 "trace_get_info", 00:26:27.079 "trace_get_tpoint_group_mask", 00:26:27.079 "trace_disable_tpoint_group", 00:26:27.079 "trace_enable_tpoint_group", 00:26:27.079 "trace_clear_tpoint_mask", 00:26:27.079 "trace_set_tpoint_mask", 00:26:27.079 "notify_get_notifications", 00:26:27.079 "notify_get_types", 00:26:27.079 "spdk_get_version", 00:26:27.079 "rpc_get_methods" 00:26:27.079 ] 00:26:27.079 05:37:46 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:27.079 05:37:46 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:27.079 05:37:46 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59127 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 59127 ']' 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 59127 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:27.079 05:37:46 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59127 00:26:27.338 killing process with pid 59127 00:26:27.338 05:37:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:27.338 05:37:47 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:27.338 05:37:47 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59127' 00:26:27.338 05:37:47 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 59127 00:26:27.338 05:37:47 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 59127 00:26:30.632 ************************************ 00:26:30.632 END TEST spdkcli_tcp 00:26:30.632 ************************************ 00:26:30.632 00:26:30.632 real 0m5.018s 00:26:30.632 user 0m8.899s 00:26:30.632 sys 0m0.834s 00:26:30.632 05:37:49 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:30.632 05:37:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.632 05:37:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:30.632 05:37:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:30.632 05:37:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:30.632 05:37:49 -- common/autotest_common.sh@10 -- # set +x 00:26:30.632 ************************************ 00:26:30.632 START TEST dpdk_mem_utility 00:26:30.632 ************************************ 00:26:30.632 05:37:49 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:26:30.632 * Looking for test storage... 
00:26:30.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:26:30.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.632 05:37:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:30.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.632 --rc genhtml_branch_coverage=1 00:26:30.632 --rc genhtml_function_coverage=1 00:26:30.632 --rc genhtml_legend=1 00:26:30.632 --rc geninfo_all_blocks=1 00:26:30.632 --rc geninfo_unexecuted_blocks=1 00:26:30.632 00:26:30.632 ' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:30.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.632 --rc genhtml_branch_coverage=1 00:26:30.632 --rc genhtml_function_coverage=1 00:26:30.632 --rc genhtml_legend=1 00:26:30.632 --rc geninfo_all_blocks=1 00:26:30.632 --rc geninfo_unexecuted_blocks=1 00:26:30.632 00:26:30.632 ' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:30.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.632 --rc genhtml_branch_coverage=1 00:26:30.632 --rc genhtml_function_coverage=1 00:26:30.632 --rc genhtml_legend=1 00:26:30.632 --rc geninfo_all_blocks=1 00:26:30.632 --rc geninfo_unexecuted_blocks=1 00:26:30.632 00:26:30.632 ' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:30.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.632 --rc genhtml_branch_coverage=1 00:26:30.632 --rc genhtml_function_coverage=1 00:26:30.632 --rc genhtml_legend=1 00:26:30.632 --rc geninfo_all_blocks=1 00:26:30.632 --rc geninfo_unexecuted_blocks=1 00:26:30.632 00:26:30.632 ' 00:26:30.632 05:37:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:30.632 05:37:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59260 00:26:30.632 05:37:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59260 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59260 ']' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:30.632 05:37:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:30.632 05:37:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:30.632 [2024-11-20 05:37:50.274347] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:30.632 [2024-11-20 05:37:50.274511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59260 ] 00:26:30.632 [2024-11-20 05:37:50.457523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.892 [2024-11-20 05:37:50.604992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.833 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:31.833 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:26:31.833 05:37:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:26:31.833 05:37:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:26:31.833 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.833 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:31.833 { 00:26:31.833 "filename": "/tmp/spdk_mem_dump.txt" 00:26:31.833 } 00:26:31.833 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.833 05:37:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:26:32.095 DPDK memory size 824.000000 MiB in 1 heap(s) 00:26:32.095 1 heaps totaling size 824.000000 MiB 00:26:32.095 size: 824.000000 MiB heap id: 0 00:26:32.095 end heaps---------- 00:26:32.095 9 mempools totaling size 603.782043 MiB 00:26:32.095 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:26:32.095 size: 158.602051 MiB name: PDU_data_out_Pool 00:26:32.095 size: 100.555481 MiB name: bdev_io_59260 00:26:32.095 size: 50.003479 MiB name: msgpool_59260 00:26:32.095 size: 36.509338 MiB name: fsdev_io_59260 00:26:32.095 size: 21.763794 MiB name: PDU_Pool 00:26:32.095 size: 19.513306 MiB name: SCSI_TASK_Pool 00:26:32.095 size: 4.133484 MiB name: evtpool_59260 00:26:32.095 size: 0.026123 MiB name: Session_Pool 00:26:32.095 end mempools------- 00:26:32.095 6 memzones totaling size 4.142822 MiB 00:26:32.095 size: 1.000366 MiB name: RG_ring_0_59260 00:26:32.095 size: 1.000366 MiB name: RG_ring_1_59260 00:26:32.095 size: 1.000366 MiB name: RG_ring_4_59260 00:26:32.095 size: 1.000366 MiB name: RG_ring_5_59260 00:26:32.095 size: 0.125366 MiB name: RG_ring_2_59260 00:26:32.095 size: 0.015991 MiB name: RG_ring_3_59260 00:26:32.095 end memzones------- 00:26:32.095 05:37:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:26:32.095 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:26:32.095 list of free elements. 
size: 16.779663 MiB 00:26:32.095 element at address: 0x200006400000 with size: 1.995972 MiB 00:26:32.095 element at address: 0x20000a600000 with size: 1.995972 MiB 00:26:32.095 element at address: 0x200003e00000 with size: 1.991028 MiB 00:26:32.095 element at address: 0x200019500040 with size: 0.999939 MiB 00:26:32.095 element at address: 0x200019900040 with size: 0.999939 MiB 00:26:32.095 element at address: 0x200019a00000 with size: 0.999084 MiB 00:26:32.095 element at address: 0x200032600000 with size: 0.994324 MiB 00:26:32.095 element at address: 0x200000400000 with size: 0.992004 MiB 00:26:32.095 element at address: 0x200019200000 with size: 0.959656 MiB 00:26:32.095 element at address: 0x200019d00040 with size: 0.936401 MiB 00:26:32.095 element at address: 0x200000200000 with size: 0.716980 MiB 00:26:32.095 element at address: 0x20001b400000 with size: 0.560974 MiB 00:26:32.095 element at address: 0x200000c00000 with size: 0.489197 MiB 00:26:32.095 element at address: 0x200019600000 with size: 0.487976 MiB 00:26:32.095 element at address: 0x200019e00000 with size: 0.485413 MiB 00:26:32.095 element at address: 0x200012c00000 with size: 0.433472 MiB 00:26:32.095 element at address: 0x200028800000 with size: 0.390442 MiB 00:26:32.095 element at address: 0x200000800000 with size: 0.350891 MiB 00:26:32.095 list of standard malloc elements. size: 199.289429 MiB 00:26:32.095 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:26:32.095 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:26:32.095 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:26:32.095 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:26:32.095 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:26:32.095 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:26:32.095 element at address: 0x200019deff40 with size: 0.062683 MiB 00:26:32.095 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:26:32.095 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:26:32.095 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:26:32.095 element at address: 0x200012bff040 with size: 0.000305 MiB 00:26:32.096 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:26:32.096 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:26:32.096 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200000cff000 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff180 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff280 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff380 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff480 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff580 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff680 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff780 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff880 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bff980 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:26:32.096 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200019affc40 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:26:32.096 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4916c0 with size: 0.000244 MiB 
00:26:32.097 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:26:32.097 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:26:32.097 element at address: 0x200028863f40 with size: 0.000244 MiB 00:26:32.097 element at address: 0x200028864040 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886af80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b080 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b180 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b280 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b380 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b480 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b580 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b680 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b780 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b880 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886b980 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886be80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c080 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c180 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c280 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c380 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c480 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c580 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c680 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c780 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c880 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886c980 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d080 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d180 
with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d280 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d380 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d480 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d580 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d680 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d780 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d880 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886d980 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886da80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886db80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886de80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886df80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e080 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e180 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e280 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e380 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e480 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e580 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e680 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e780 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e880 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886e980 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f080 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f180 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f280 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f380 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f480 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f580 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f680 with size: 0.000244 MiB 00:26:32.097 element at address: 0x20002886f780 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886f880 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886f980 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:26:32.098 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:26:32.098 list of memzone associated elements. 
size: 607.930908 MiB 00:26:32.098 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:26:32.098 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:26:32.098 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:26:32.098 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:26:32.098 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:26:32.098 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59260_0 00:26:32.098 element at address: 0x200000dff340 with size: 48.003113 MiB 00:26:32.098 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59260_0 00:26:32.098 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:26:32.098 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59260_0 00:26:32.098 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:26:32.098 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:26:32.098 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:26:32.098 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:26:32.098 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:26:32.098 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59260_0 00:26:32.098 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:26:32.098 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59260 00:26:32.098 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:26:32.098 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59260 00:26:32.098 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:26:32.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:26:32.098 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:26:32.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:26:32.098 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:26:32.098 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:26:32.098 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:26:32.098 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:26:32.098 element at address: 0x200000cff100 with size: 1.000549 MiB 00:26:32.098 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59260 00:26:32.098 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:26:32.098 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59260 00:26:32.098 element at address: 0x200019affd40 with size: 1.000549 MiB 00:26:32.098 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59260 00:26:32.098 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:26:32.098 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59260 00:26:32.098 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:26:32.098 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59260 00:26:32.098 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:26:32.098 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59260 00:26:32.098 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:26:32.098 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:26:32.098 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:26:32.098 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:26:32.098 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:26:32.098 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:26:32.098 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:26:32.098 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59260 00:26:32.098 element at address: 0x20000085df80 with size: 0.125549 MiB 00:26:32.098 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59260 00:26:32.098 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:26:32.098 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:26:32.098 element at address: 0x200028864140 with size: 0.023804 MiB 00:26:32.098 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:26:32.098 element at address: 0x200000859d40 with size: 0.016174 MiB 00:26:32.098 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59260 00:26:32.098 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:26:32.098 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:26:32.098 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:26:32.098 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59260 00:26:32.098 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:26:32.098 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59260 00:26:32.098 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:26:32.098 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59260 00:26:32.098 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:26:32.098 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:26:32.098 05:37:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:26:32.098 05:37:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59260 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59260 ']' 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59260 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59260 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59260' 00:26:32.098 killing process with pid 59260 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59260 00:26:32.098 05:37:51 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59260 00:26:35.436 00:26:35.436 real 0m4.722s 00:26:35.436 user 0m4.495s 00:26:35.436 sys 0m0.754s 00:26:35.436 05:37:54 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:35.436 05:37:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:26:35.436 ************************************ 00:26:35.436 END TEST dpdk_mem_utility 00:26:35.436 ************************************ 00:26:35.436 05:37:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:35.436 05:37:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:35.436 05:37:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:35.436 05:37:54 -- common/autotest_common.sh@10 -- # set +x 
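For context on the dump that closes the dpdk_mem_utility section above: test_dpdk_mem_info.sh drives everything through scripts/rpc.py, and the "element at address ... with size ..." listing is DPDK's own heap report. Below is a minimal sketch of the kind of before/after comparison such a test can perform. It assumes a running SPDK target, the env_dpdk_get_mem_stats RPC, and that the RPC's JSON reply carries a filename field pointing at the dump (jq is used only for brevity); this is not the test's verbatim code.

#!/usr/bin/env bash
# Hedged sketch: diff DPDK heap dumps taken around an allocating RPC.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Assumption: env_dpdk_get_mem_stats replies with {"filename": "..."}.
before=$("$rpc" env_dpdk_get_mem_stats | jq -r .filename)
cp "$before" /tmp/mem_before.txt

"$rpc" bdev_malloc_create 64 4096        # any call that allocates from the heap

after=$("$rpc" env_dpdk_get_mem_stats | jq -r .filename)
# New or freed elements show up as +/- "element at address ..." lines.
diff -u /tmp/mem_before.txt "$after" | grep '^[+-].*element at address' || true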
00:26:35.436 ************************************ 00:26:35.436 START TEST event 00:26:35.436 ************************************ 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:26:35.436 * Looking for test storage... 00:26:35.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1691 -- # lcov --version 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:35.436 05:37:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.436 05:37:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.436 05:37:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.436 05:37:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.436 05:37:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.436 05:37:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.436 05:37:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.436 05:37:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.436 05:37:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.436 05:37:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.436 05:37:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.436 05:37:54 event -- scripts/common.sh@344 -- # case "$op" in 00:26:35.436 05:37:54 event -- scripts/common.sh@345 -- # : 1 00:26:35.436 05:37:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.436 05:37:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.436 05:37:54 event -- scripts/common.sh@365 -- # decimal 1 00:26:35.436 05:37:54 event -- scripts/common.sh@353 -- # local d=1 00:26:35.436 05:37:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.436 05:37:54 event -- scripts/common.sh@355 -- # echo 1 00:26:35.436 05:37:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.436 05:37:54 event -- scripts/common.sh@366 -- # decimal 2 00:26:35.436 05:37:54 event -- scripts/common.sh@353 -- # local d=2 00:26:35.436 05:37:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.436 05:37:54 event -- scripts/common.sh@355 -- # echo 2 00:26:35.436 05:37:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.436 05:37:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.436 05:37:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.436 05:37:54 event -- scripts/common.sh@368 -- # return 0 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.436 --rc genhtml_branch_coverage=1 00:26:35.436 --rc genhtml_function_coverage=1 00:26:35.436 --rc genhtml_legend=1 00:26:35.436 --rc geninfo_all_blocks=1 00:26:35.436 --rc geninfo_unexecuted_blocks=1 00:26:35.436 00:26:35.436 ' 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.436 --rc genhtml_branch_coverage=1 00:26:35.436 --rc genhtml_function_coverage=1 00:26:35.436 --rc genhtml_legend=1 00:26:35.436 --rc 
geninfo_all_blocks=1 00:26:35.436 --rc geninfo_unexecuted_blocks=1 00:26:35.436 00:26:35.436 ' 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.436 --rc genhtml_branch_coverage=1 00:26:35.436 --rc genhtml_function_coverage=1 00:26:35.436 --rc genhtml_legend=1 00:26:35.436 --rc geninfo_all_blocks=1 00:26:35.436 --rc geninfo_unexecuted_blocks=1 00:26:35.436 00:26:35.436 ' 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:35.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.436 --rc genhtml_branch_coverage=1 00:26:35.436 --rc genhtml_function_coverage=1 00:26:35.436 --rc genhtml_legend=1 00:26:35.436 --rc geninfo_all_blocks=1 00:26:35.436 --rc geninfo_unexecuted_blocks=1 00:26:35.436 00:26:35.436 ' 00:26:35.436 05:37:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:35.436 05:37:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:26:35.436 05:37:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:26:35.436 05:37:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:35.436 05:37:54 event -- common/autotest_common.sh@10 -- # set +x 00:26:35.436 ************************************ 00:26:35.436 START TEST event_perf 00:26:35.436 ************************************ 00:26:35.436 05:37:54 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:26:35.437 Running I/O for 1 seconds...[2024-11-20 05:37:55.012795] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:35.437 [2024-11-20 05:37:55.012986] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59379 ] 00:26:35.437 [2024-11-20 05:37:55.197135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.437 [2024-11-20 05:37:55.352891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.437 [2024-11-20 05:37:55.352919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.437 [2024-11-20 05:37:55.353158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.437 Running I/O for 1 seconds...[2024-11-20 05:37:55.353193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.864 00:26:36.864 lcore 0: 190899 00:26:36.864 lcore 1: 190900 00:26:36.864 lcore 2: 190899 00:26:36.864 lcore 3: 190897 00:26:36.864 done. 
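The lcov version gate traced at the top of this event section (lt 1.15 2, backed by cmp_versions and decimal in scripts/common.sh) reduces to a field-wise numeric comparison after splitting both versions on '.', '-' and ':'. A standalone re-implementation, simplified from the trace (this is not the harness's exact code, and it assumes purely numeric fields):

# lt VER1 VER2 -> exit 0 iff VER1 < VER2 (simplified sketch)
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"    # same IFS split as the trace above
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                    # equal is not less-than
}

lt 1.15 2 && echo "lcov predates 2.x"           # the exact check in this log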
00:26:36.864 00:26:36.864 real 0m1.696s 00:26:36.864 user 0m4.412s 00:26:36.864 sys 0m0.157s 00:26:36.864 05:37:56 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:36.864 05:37:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:26:36.864 ************************************ 00:26:36.864 END TEST event_perf 00:26:36.864 ************************************ 00:26:36.864 05:37:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:36.864 05:37:56 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:36.864 05:37:56 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:36.864 05:37:56 event -- common/autotest_common.sh@10 -- # set +x 00:26:36.864 ************************************ 00:26:36.864 START TEST event_reactor 00:26:36.864 ************************************ 00:26:36.864 05:37:56 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:26:36.864 [2024-11-20 05:37:56.764452] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:36.864 [2024-11-20 05:37:56.764653] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59419 ] 00:26:37.124 [2024-11-20 05:37:56.968782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.384 [2024-11-20 05:37:57.130238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.765 test_start 00:26:38.765 oneshot 00:26:38.765 tick 100 00:26:38.765 tick 100 00:26:38.765 tick 250 00:26:38.765 tick 100 00:26:38.765 tick 100 00:26:38.765 tick 100 00:26:38.765 tick 250 00:26:38.765 tick 500 00:26:38.765 tick 100 00:26:38.765 tick 100 00:26:38.765 tick 250 00:26:38.765 tick 100 00:26:38.765 tick 100 00:26:38.765 test_end 00:26:38.765 ************************************ 00:26:38.765 END TEST event_reactor 00:26:38.765 ************************************ 00:26:38.765 00:26:38.765 real 0m1.668s 00:26:38.765 user 0m1.429s 00:26:38.765 sys 0m0.129s 00:26:38.765 05:37:58 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.765 05:37:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:26:38.765 05:37:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:38.765 05:37:58 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:38.765 05:37:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:38.765 05:37:58 event -- common/autotest_common.sh@10 -- # set +x 00:26:38.765 ************************************ 00:26:38.765 START TEST event_reactor_perf 00:26:38.765 ************************************ 00:26:38.765 05:37:58 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:26:38.765 [2024-11-20 05:37:58.497554] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:26:38.765 [2024-11-20 05:37:58.497688] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:26:38.765 [2024-11-20 05:37:58.680477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.025 [2024-11-20 05:37:58.832409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.435 test_start 00:26:40.435 test_end 00:26:40.435 Performance: 341841 events per second 00:26:40.435 00:26:40.435 real 0m1.638s 00:26:40.435 user 0m1.415s 00:26:40.435 sys 0m0.114s 00:26:40.435 05:38:00 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:40.435 ************************************ 00:26:40.435 05:38:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:26:40.435 END TEST event_reactor_perf 00:26:40.435 ************************************ 00:26:40.435 05:38:00 event -- event/event.sh@49 -- # uname -s 00:26:40.435 05:38:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:26:40.435 05:38:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:40.435 05:38:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:40.435 05:38:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:40.435 05:38:00 event -- common/autotest_common.sh@10 -- # set +x 00:26:40.435 ************************************ 00:26:40.435 START TEST event_scheduler 00:26:40.435 ************************************ 00:26:40.435 05:38:00 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:26:40.435 * Looking for test storage... 
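Each of the event_perf, event_reactor and event_reactor_perf sections above is produced by the harness's run_test wrapper, which prints the asterisk START/END banners and the real/user/sys timing lines that bracket every test in this log. A rough stand-in for that observable behaviour (run_test_sketch is hypothetical; the real function in common/autotest_common.sh also manages xtrace and failure bookkeeping):

run_test_sketch() {
    local name=$1 banner rc=0; shift
    banner=$(printf '*%.0s' {1..36})
    echo "$banner"; echo "START TEST $name"; echo "$banner"
    local start=$SECONDS
    "$@" || rc=$?                 # run the test command, keep its status
    echo "$banner"; echo "END TEST $name"; echo "$banner"
    echo "elapsed ~$(( SECONDS - start ))s, rc=$rc"
    return "$rc"
}

# e.g. the invocation this log shows for event_perf:
run_test_sketch event_perf \
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1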
00:26:40.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:26:40.435 05:38:00 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:40.435 05:38:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:26:40.435 05:38:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.695 05:38:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.695 --rc genhtml_branch_coverage=1 00:26:40.695 --rc genhtml_function_coverage=1 00:26:40.695 --rc genhtml_legend=1 00:26:40.695 --rc geninfo_all_blocks=1 00:26:40.695 --rc geninfo_unexecuted_blocks=1 00:26:40.695 00:26:40.695 ' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.695 --rc genhtml_branch_coverage=1 00:26:40.695 --rc genhtml_function_coverage=1 00:26:40.695 --rc genhtml_legend=1 00:26:40.695 --rc geninfo_all_blocks=1 00:26:40.695 --rc geninfo_unexecuted_blocks=1 00:26:40.695 00:26:40.695 ' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.695 --rc genhtml_branch_coverage=1 00:26:40.695 --rc genhtml_function_coverage=1 00:26:40.695 --rc genhtml_legend=1 00:26:40.695 --rc geninfo_all_blocks=1 00:26:40.695 --rc geninfo_unexecuted_blocks=1 00:26:40.695 00:26:40.695 ' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:40.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.695 --rc genhtml_branch_coverage=1 00:26:40.695 --rc genhtml_function_coverage=1 00:26:40.695 --rc genhtml_legend=1 00:26:40.695 --rc geninfo_all_blocks=1 00:26:40.695 --rc geninfo_unexecuted_blocks=1 00:26:40.695 00:26:40.695 ' 00:26:40.695 05:38:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:26:40.695 05:38:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59531 00:26:40.695 05:38:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:26:40.695 05:38:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:26:40.695 05:38:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59531 00:26:40.695 05:38:00 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59531 ']' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:40.695 05:38:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:40.695 [2024-11-20 05:38:00.490044] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:26:40.695 [2024-11-20 05:38:00.490301] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59531 ] 00:26:40.955 [2024-11-20 05:38:00.674573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.955 [2024-11-20 05:38:00.829716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.955 [2024-11-20 05:38:00.829846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.955 [2024-11-20 05:38:00.829942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.955 [2024-11-20 05:38:00.829982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.523 05:38:01 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:41.524 05:38:01 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:26:41.524 05:38:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:26:41.524 05:38:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.524 05:38:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:41.524 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:41.524 POWER: Cannot set governor of lcore 0 to userspace 00:26:41.524 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:41.524 POWER: Cannot set governor of lcore 0 to performance 00:26:41.524 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:41.524 POWER: Cannot set governor of lcore 0 to userspace 00:26:41.524 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:26:41.524 POWER: Cannot set governor of lcore 0 to userspace 00:26:41.524 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:26:41.524 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:26:41.524 POWER: Unable to set Power Management Environment for lcore 0 00:26:41.524 [2024-11-20 05:38:01.395088] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:26:41.524 [2024-11-20 05:38:01.395118] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:26:41.524 [2024-11-20 05:38:01.395129] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:26:41.524 [2024-11-20 05:38:01.395150] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:26:41.524 [2024-11-20 05:38:01.395159] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:26:41.524 [2024-11-20 05:38:01.395169] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:26:41.524 05:38:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.524 05:38:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:26:41.524 05:38:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.524 05:38:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 [2024-11-20 05:38:01.811900] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:26:42.091 05:38:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.091 05:38:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:26:42.091 05:38:01 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:42.091 05:38:01 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 ************************************ 00:26:42.091 START TEST scheduler_create_thread 00:26:42.091 ************************************ 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 2 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 3 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 4 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 5 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 6 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.091 7 00:26:42.091 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.092 8 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:42.092 9 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:42.092 05:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:43.066 10 00:26:43.066 05:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.066 05:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:26:43.066 05:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.066 05:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:44.445 05:38:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.445 05:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:26:44.445 05:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:26:44.445 05:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.445 05:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:45.014 05:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.014 05:38:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:26:45.014 05:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.014 05:38:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:45.949 05:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.949 05:38:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:26:45.949 05:38:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:26:45.949 05:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.949 05:38:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 ************************************ 00:26:46.517 END TEST scheduler_create_thread 00:26:46.517 ************************************ 00:26:46.517 05:38:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.517 00:26:46.517 real 0m4.389s 00:26:46.517 user 0m0.023s 00:26:46.517 sys 0m0.014s 00:26:46.517 05:38:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:46.517 05:38:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 05:38:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:46.517 05:38:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59531 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59531 ']' 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59531 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59531 00:26:46.517 killing process with pid 59531 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59531' 00:26:46.517 05:38:06 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59531 00:26:46.517 05:38:06 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59531 00:26:46.776 [2024-11-20 05:38:06.494009] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:26:48.158 00:26:48.158 real 0m7.760s 00:26:48.158 user 0m17.935s 00:26:48.158 sys 0m0.622s 00:26:48.158 05:38:07 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:48.158 05:38:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:26:48.158 ************************************ 00:26:48.158 END TEST event_scheduler 00:26:48.158 ************************************ 00:26:48.158 05:38:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:26:48.158 05:38:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:26:48.158 05:38:07 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:48.158 05:38:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:48.158 05:38:07 event -- common/autotest_common.sh@10 -- # set +x 00:26:48.158 ************************************ 00:26:48.158 START TEST app_repeat 00:26:48.158 ************************************ 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59665 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:26:48.158 Process app_repeat pid: 59665 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59665' 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:48.158 spdk_app_start Round 0 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59665 /var/tmp/spdk-nbd.sock 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59665 ']' 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:48.158 05:38:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:48.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:48.158 05:38:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:48.158 [2024-11-20 05:38:08.054555] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
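The event_scheduler test that just finished is driven entirely over RPC: framework_set_scheduler and framework_start_init first (the POWER/governor errors above show the dynamic governor failing to initialize and the scheduler carrying on without it), then thread create/set_active/delete through an rpc.py plugin. A condensed, hypothetical replay of that sequence, assuming the test/event/scheduler app is already up with --wait-for-rpc, that scheduler_plugin is importable by rpc.py, and that each create call prints the new thread id (as the thread_id=11 capture above suggests):

rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # stand-in for the harness helper

rpc_cmd framework_set_scheduler dynamic
rpc_cmd framework_start_init

# One pinned active/idle pair per core of the 0xF mask, as in the test.
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n "active_$mask" -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n "idle_$mask"   -m "$mask" -a 0
done

# Unpinned thread: create it idle, raise it to 50% active, then delete it.
tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"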
00:26:48.158 [2024-11-20 05:38:08.054710] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:26:48.418 [2024-11-20 05:38:08.241084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:48.677 [2024-11-20 05:38:08.379046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.677 [2024-11-20 05:38:08.379090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.255 05:38:08 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:49.255 05:38:08 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:26:49.255 05:38:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:49.512 Malloc0 00:26:49.512 05:38:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:50.082 Malloc1 00:26:50.082 05:38:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:50.082 /dev/nbd0 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:50.082 05:38:09 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:50.082 1+0 records in 00:26:50.082 1+0 records out 00:26:50.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419154 s, 9.8 MB/s 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:50.082 05:38:09 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:50.082 05:38:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:50.651 /dev/nbd1 00:26:50.651 05:38:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:50.651 05:38:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:50.651 1+0 records in 00:26:50.651 1+0 records out 00:26:50.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412409 s, 9.9 MB/s 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:50.651 05:38:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:26:50.651 05:38:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:50.651 05:38:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:50.651 05:38:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:50.652 05:38:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.652 
05:38:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:50.913 { 00:26:50.913 "nbd_device": "/dev/nbd0", 00:26:50.913 "bdev_name": "Malloc0" 00:26:50.913 }, 00:26:50.913 { 00:26:50.913 "nbd_device": "/dev/nbd1", 00:26:50.913 "bdev_name": "Malloc1" 00:26:50.913 } 00:26:50.913 ]' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:50.913 { 00:26:50.913 "nbd_device": "/dev/nbd0", 00:26:50.913 "bdev_name": "Malloc0" 00:26:50.913 }, 00:26:50.913 { 00:26:50.913 "nbd_device": "/dev/nbd1", 00:26:50.913 "bdev_name": "Malloc1" 00:26:50.913 } 00:26:50.913 ]' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:50.913 /dev/nbd1' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:50.913 /dev/nbd1' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:50.913 256+0 records in 00:26:50.913 256+0 records out 00:26:50.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126618 s, 82.8 MB/s 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:50.913 256+0 records in 00:26:50.913 256+0 records out 00:26:50.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293888 s, 35.7 MB/s 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:50.913 256+0 records in 00:26:50.913 256+0 records out 00:26:50.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268833 s, 39.0 MB/s 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:50.913 05:38:10 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:50.913 05:38:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:51.171 05:38:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:51.431 05:38:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:51.431 05:38:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:51.690 05:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:51.950 05:38:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:51.950 05:38:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:51.950 05:38:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:51.950 05:38:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:51.950 05:38:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:51.950 05:38:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:52.209 05:38:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:26:53.608 [2024-11-20 05:38:13.521878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:53.865 [2024-11-20 05:38:13.676697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.865 [2024-11-20 05:38:13.676697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.122 [2024-11-20 05:38:13.939791] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:26:54.122 [2024-11-20 05:38:13.939959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:26:55.494 05:38:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:26:55.494 spdk_app_start Round 1 00:26:55.494 05:38:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:26:55.494 05:38:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59665 /var/tmp/spdk-nbd.sock 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59665 ']' 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:55.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
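Each round then rebuilds its devices with the same two RPC pairs the trace shows. A hedged restatement of that sequence, with the rpc.py path, socket, and the 64 MB / 4096-byte-block arguments copied from the log (the returned bdev names Malloc0/Malloc1 match what spdk-nbd reports above):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096        # -> Malloc0: 64 MB, 4096-byte blocks
    $RPC bdev_malloc_create 64 4096        # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0  # expose each bdev as a kernel block device
    $RPC nbd_start_disk Malloc1 /dev/nbd1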
00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:55.494 05:38:15 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:26:55.494 05:38:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:56.058 Malloc0 00:26:56.058 05:38:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:26:56.316 Malloc1 00:26:56.316 05:38:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.316 05:38:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:26:56.573 /dev/nbd0 00:26:56.573 05:38:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:56.573 05:38:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:56.573 05:38:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:56.574 05:38:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:56.574 1+0 records in 00:26:56.574 1+0 records out 
00:26:56.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369319 s, 11.1 MB/s 00:26:56.574 05:38:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:56.574 05:38:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:26:56.574 05:38:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:56.574 05:38:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:56.574 05:38:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:26:56.574 05:38:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.574 05:38:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.574 05:38:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:26:56.831 /dev/nbd1 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:26:56.831 1+0 records in 00:26:56.831 1+0 records out 00:26:56.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388139 s, 10.6 MB/s 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:26:56.831 05:38:16 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:56.831 05:38:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:57.089 { 00:26:57.089 "nbd_device": "/dev/nbd0", 00:26:57.089 "bdev_name": "Malloc0" 00:26:57.089 }, 00:26:57.089 { 00:26:57.089 "nbd_device": "/dev/nbd1", 00:26:57.089 "bdev_name": "Malloc1" 00:26:57.089 } 
00:26:57.089 ]' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:57.089 { 00:26:57.089 "nbd_device": "/dev/nbd0", 00:26:57.089 "bdev_name": "Malloc0" 00:26:57.089 }, 00:26:57.089 { 00:26:57.089 "nbd_device": "/dev/nbd1", 00:26:57.089 "bdev_name": "Malloc1" 00:26:57.089 } 00:26:57.089 ]' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:57.089 /dev/nbd1' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:57.089 /dev/nbd1' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:26:57.089 256+0 records in 00:26:57.089 256+0 records out 00:26:57.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01395 s, 75.2 MB/s 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.089 05:38:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:57.347 256+0 records in 00:26:57.347 256+0 records out 00:26:57.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188871 s, 55.5 MB/s 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:57.347 256+0 records in 00:26:57.347 256+0 records out 00:26:57.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034657 s, 30.3 MB/s 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.347 05:38:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:57.605 05:38:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:57.863 05:38:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:58.121 05:38:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:26:58.121 05:38:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:26:58.685 05:38:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:00.062 [2024-11-20 05:38:19.896168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:00.321 [2024-11-20 05:38:20.050478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.321 [2024-11-20 05:38:20.050481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.581 [2024-11-20 05:38:20.310894] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:00.581 [2024-11-20 05:38:20.310980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:01.963 05:38:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:27:01.963 spdk_app_start Round 2 00:27:01.963 05:38:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:27:01.963 05:38:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59665 /var/tmp/spdk-nbd.sock 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59665 ']' 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:01.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
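The data check each round performs via nbd_dd_data_verify is a plain write-then-compare over the nbd devices. A standalone sketch of that pattern (the temp path here is illustrative; the trace uses test/event/nbdrandtest, and the dd/cmp arguments mirror the ones logged above):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through O_DIRECT
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"   # nonzero exit (and xtrace abort) on any mismatch
    done
    rm "$tmp"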
00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:01.963 05:38:21 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:27:01.963 05:38:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:02.221 Malloc0 00:27:02.221 05:38:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:27:02.481 Malloc1 00:27:02.481 05:38:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:27:02.481 05:38:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:02.482 05:38:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:27:02.741 /dev/nbd0 00:27:02.741 05:38:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:02.741 05:38:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:02.741 1+0 records in 00:27:02.741 1+0 records out 
00:27:02.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468231 s, 8.7 MB/s 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:02.741 05:38:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:27:02.741 05:38:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:02.741 05:38:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:02.741 05:38:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:27:03.001 /dev/nbd1 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:27:03.001 1+0 records in 00:27:03.001 1+0 records out 00:27:03.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466679 s, 8.8 MB/s 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:27:03.001 05:38:22 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.001 05:38:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:03.262 { 00:27:03.262 "nbd_device": "/dev/nbd0", 00:27:03.262 "bdev_name": "Malloc0" 00:27:03.262 }, 00:27:03.262 { 00:27:03.262 "nbd_device": "/dev/nbd1", 00:27:03.262 "bdev_name": "Malloc1" 00:27:03.262 } 
00:27:03.262 ]' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:03.262 { 00:27:03.262 "nbd_device": "/dev/nbd0", 00:27:03.262 "bdev_name": "Malloc0" 00:27:03.262 }, 00:27:03.262 { 00:27:03.262 "nbd_device": "/dev/nbd1", 00:27:03.262 "bdev_name": "Malloc1" 00:27:03.262 } 00:27:03.262 ]' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:03.262 /dev/nbd1' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:03.262 /dev/nbd1' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:27:03.262 256+0 records in 00:27:03.262 256+0 records out 00:27:03.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136939 s, 76.6 MB/s 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:03.262 05:38:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:03.561 256+0 records in 00:27:03.561 256+0 records out 00:27:03.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026709 s, 39.3 MB/s 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:03.561 256+0 records in 00:27:03.561 256+0 records out 00:27:03.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296003 s, 35.4 MB/s 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:03.561 05:38:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:03.562 05:38:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.562 05:38:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:03.562 05:38:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:03.562 05:38:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:03.562 05:38:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.562 05:38:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:03.822 05:38:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:04.082 05:38:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:27:04.082 05:38:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:27:04.651 05:38:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:27:06.032 [2024-11-20 05:38:25.732402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.032 [2024-11-20 05:38:25.848893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.032 [2024-11-20 05:38:25.848898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.291 [2024-11-20 05:38:26.055369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:27:06.291 [2024-11-20 05:38:26.055467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:27:07.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:07.673 05:38:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59665 /var/tmp/spdk-nbd.sock 00:27:07.673 05:38:27 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59665 ']' 00:27:07.673 05:38:27 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:07.673 05:38:27 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:07.673 05:38:27 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
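The count=0 check just traced derives its number from the nbd_get_disks JSON. A simplified sketch of that derivation (function name illustrative; nbd_get_disks and the jq/grep pipeline are as shown in the log, including the `true` fallback, since grep -c exits nonzero when it counts zero matches):

    nbd_count() {
        local sock=$1 json names
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits 1 on no matches, hence the trailing true
        echo "$names" | grep -c /dev/nbd || true
    }
    # usage: [ "$(nbd_count /var/tmp/spdk-nbd.sock)" -eq 2 ]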
00:27:07.673 05:38:27 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:07.673 05:38:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:27:07.931 05:38:27 event.app_repeat -- event/event.sh@39 -- # killprocess 59665 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59665 ']' 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59665 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59665 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:07.931 killing process with pid 59665 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59665' 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59665 00:27:07.931 05:38:27 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59665 00:27:09.424 spdk_app_start is called in Round 0. 00:27:09.424 Shutdown signal received, stop current app iteration 00:27:09.424 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:27:09.424 spdk_app_start is called in Round 1. 00:27:09.424 Shutdown signal received, stop current app iteration 00:27:09.424 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:27:09.424 spdk_app_start is called in Round 2. 00:27:09.424 Shutdown signal received, stop current app iteration 00:27:09.424 Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 reinitialization... 00:27:09.424 spdk_app_start is called in Round 3. 00:27:09.424 Shutdown signal received, stop current app iteration 00:27:09.424 05:38:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:27:09.424 05:38:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:27:09.424 00:27:09.424 real 0m21.020s 00:27:09.424 user 0m45.322s 00:27:09.424 sys 0m3.155s 00:27:09.424 05:38:29 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:09.424 05:38:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:27:09.424 ************************************ 00:27:09.424 END TEST app_repeat 00:27:09.424 ************************************ 00:27:09.424 05:38:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:27:09.424 05:38:29 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:09.424 05:38:29 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:09.424 05:38:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:09.424 05:38:29 event -- common/autotest_common.sh@10 -- # set +x 00:27:09.424 ************************************ 00:27:09.424 START TEST cpu_locks 00:27:09.424 ************************************ 00:27:09.424 05:38:29 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:27:09.424 * Looking for test storage... 
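The killprocess call traced above follows a consistent shutdown pattern: confirm the pid is alive, refuse to signal a sudo wrapper, then SIGTERM and reap. A condensed sketch of that logic (the real helper in common/autotest_common.sh also branches on uname; this Linux-only rendition is illustrative):

    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" || return 1                  # bail if the pid is already gone
        name=$(ps --no-headers -o comm= "$pid")     # Linux form of the comm lookup
        [ "$name" = sudo ] && return 1              # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # works because the app is our child
    }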
00:27:09.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:27:09.424 05:38:29 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:09.424 05:38:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:27:09.424 05:38:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:09.424 05:38:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:09.424 05:38:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.425 05:38:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:09.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.425 --rc genhtml_branch_coverage=1 00:27:09.425 --rc genhtml_function_coverage=1 00:27:09.425 --rc genhtml_legend=1 00:27:09.425 --rc geninfo_all_blocks=1 00:27:09.425 --rc geninfo_unexecuted_blocks=1 00:27:09.425 00:27:09.425 ' 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:09.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.425 --rc genhtml_branch_coverage=1 00:27:09.425 --rc genhtml_function_coverage=1 
00:27:09.425 --rc genhtml_legend=1 00:27:09.425 --rc geninfo_all_blocks=1 00:27:09.425 --rc geninfo_unexecuted_blocks=1 00:27:09.425 00:27:09.425 ' 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:09.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.425 --rc genhtml_branch_coverage=1 00:27:09.425 --rc genhtml_function_coverage=1 00:27:09.425 --rc genhtml_legend=1 00:27:09.425 --rc geninfo_all_blocks=1 00:27:09.425 --rc geninfo_unexecuted_blocks=1 00:27:09.425 00:27:09.425 ' 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:09.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.425 --rc genhtml_branch_coverage=1 00:27:09.425 --rc genhtml_function_coverage=1 00:27:09.425 --rc genhtml_legend=1 00:27:09.425 --rc geninfo_all_blocks=1 00:27:09.425 --rc geninfo_unexecuted_blocks=1 00:27:09.425 00:27:09.425 ' 00:27:09.425 05:38:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:27:09.425 05:38:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:27:09.425 05:38:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:27:09.425 05:38:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:09.425 05:38:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:09.425 ************************************ 00:27:09.425 START TEST default_locks 00:27:09.425 ************************************ 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60129 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60129 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60129 ']' 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:09.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:09.425 05:38:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:09.684 [2024-11-20 05:38:29.422027] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
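The xtrace above comes from the coverage setup in scripts/common.sh: `lt 1.15 2` delegates to `cmp_versions 1.15 '<' 2`, which splits both version strings on `.`, `-`, and `:` into arrays and walks them component by component; lcov 1.x therefore tests as older than 2, and the extra branch/function flags land in LCOV_OPTS and LCOV. A minimal sketch of the same comparison idea, assuming purely numeric components (a paraphrase, not the scripts/common.sh source):

    # sketch: return 0 when dotted version $1 sorts before $2
    version_lt() {
        local IFS=.-:
        read -ra a <<< "$1"; read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo 'old lcov: add --rc lcov_branch_coverage=1 ...'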
00:27:09.685 [2024-11-20 05:38:29.422161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60129 ] 00:27:09.685 [2024-11-20 05:38:29.601855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.943 [2024-11-20 05:38:29.734438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.881 05:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:10.881 05:38:30 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:27:10.881 05:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60129 00:27:10.881 05:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60129 00:27:10.881 05:38:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60129 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 60129 ']' 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 60129 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60129 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:11.450 killing process with pid 60129 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60129' 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 60129 00:27:11.450 05:38:31 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 60129 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60129 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60129 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60129 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 60129 ']' 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.988 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:13.988 ERROR: process (pid: 60129) is no longer running 00:27:13.988 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60129) - No such process 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:13.988 00:27:13.988 real 0m4.559s 00:27:13.988 user 0m4.507s 00:27:13.988 sys 0m0.719s 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.988 05:38:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:27:13.988 ************************************ 00:27:13.988 END TEST default_locks 00:27:13.988 ************************************ 00:27:14.248 05:38:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:27:14.248 05:38:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:14.248 05:38:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:14.248 05:38:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 ************************************ 00:27:14.248 START TEST default_locks_via_rpc 00:27:14.248 ************************************ 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60216 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60216 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60216 ']' 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:14.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
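The default_locks pass that just finished verifies the claim with plain util-linux tooling: the locks_exist helper pipes `lslocks -p <pid>` into a grep for spdk_cpu_lock, i.e. the per-core lock file the target flocks under /var/tmp, and killing the target is then expected to make a follow-up waitforlisten fail (the NOT wrapper turns that failure into a pass). The check itself is a one-liner, exactly as traced above:

    # sketch: does <pid> hold an SPDK per-core lock file?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 60129 && echo 'pid 60129 holds /var/tmp/spdk_cpu_lock_*'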
00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:14.248 05:38:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:14.248 [2024-11-20 05:38:34.046255] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:14.248 [2024-11-20 05:38:34.046434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60216 ] 00:27:14.508 [2024-11-20 05:38:34.230956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.508 [2024-11-20 05:38:34.365110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:27:15.890 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 60216 ']' 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:27:15.891 05:38:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:15.891 killing process with pid 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60216' 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 60216 00:27:15.891 05:38:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 60216 00:27:19.270 00:27:19.270 real 0m4.730s 00:27:19.270 user 0m4.667s 00:27:19.270 sys 0m0.685s 00:27:19.270 05:38:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:19.270 05:38:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:19.270 ************************************ 00:27:19.270 END TEST default_locks_via_rpc 00:27:19.270 ************************************ 00:27:19.270 05:38:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:27:19.270 05:38:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:19.270 05:38:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:19.270 05:38:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:19.270 ************************************ 00:27:19.270 START TEST non_locking_app_on_locked_coremask 00:27:19.270 ************************************ 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60290 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60290 /var/tmp/spdk.sock 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60290 ']' 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:19.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:19.270 05:38:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:19.270 [2024-11-20 05:38:38.851980] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
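default_locks_via_rpc, which just completed, drives the same claim machinery at runtime instead of at startup: `rpc_cmd framework_disable_cpumask_locks` makes the running target drop its per-core flocks and `framework_enable_cpumask_locks` re-takes them, after which the lslocks check sees the lock file again. Driven from the shell with the stock rpc.py client, the exchange would look roughly like this (socket path as used throughout this run):

    # sketch: toggle a running target's per-core lock files over JSON-RPC
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # releases the flocks
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claims them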
00:27:19.270 [2024-11-20 05:38:38.852147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60290 ] 00:27:19.270 [2024-11-20 05:38:39.044425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.528 [2024-11-20 05:38:39.213527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60317 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60317 /var/tmp/spdk2.sock 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60317 ']' 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:20.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:20.906 05:38:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:20.906 [2024-11-20 05:38:40.557961] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:20.906 [2024-11-20 05:38:40.558122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60317 ] 00:27:20.906 [2024-11-20 05:38:40.756493] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
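non_locking_app_on_locked_coremask then demonstrates the escape hatch recorded in the notice just above: a second target started with --disable-cpumask-locks skips the claim entirely, so it can share core 0 with the first instance as long as it listens on its own RPC socket. Reduced to the two launch lines from this trace:

    # sketch: two targets on core 0; only the first claims the core lock
    spdk_tgt -m 0x1 &                                                  # flocks /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # starts without claiming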
00:27:20.906 [2024-11-20 05:38:40.756597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.473 [2024-11-20 05:38:41.092463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.051 05:38:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:24.051 05:38:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:27:24.051 05:38:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60290 00:27:24.051 05:38:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60290 00:27:24.051 05:38:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60290 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60290 ']' 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60290 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60290 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:24.310 killing process with pid 60290 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60290' 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60290 00:27:24.310 05:38:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60290 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60317 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60317 ']' 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60317 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60317 00:27:30.878 05:38:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:30.878 05:38:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:30.878 killing process with pid 60317 00:27:30.878 05:38:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60317' 00:27:30.878 05:38:50 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60317 00:27:30.878 05:38:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60317 00:27:33.469 00:27:33.469 real 0m14.221s 00:27:33.469 user 0m14.274s 00:27:33.469 sys 0m1.807s 00:27:33.469 05:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:33.469 05:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:33.469 ************************************ 00:27:33.469 END TEST non_locking_app_on_locked_coremask 00:27:33.469 ************************************ 00:27:33.469 05:38:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:27:33.469 05:38:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:33.469 05:38:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:33.469 05:38:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:33.469 ************************************ 00:27:33.469 START TEST locking_app_on_unlocked_coremask 00:27:33.469 ************************************ 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60486 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60486 /var/tmp/spdk.sock 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60486 ']' 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:33.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:33.469 05:38:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:33.469 [2024-11-20 05:38:53.127981] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:33.469 [2024-11-20 05:38:53.128140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60486 ] 00:27:33.469 [2024-11-20 05:38:53.313857] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
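Teardown in every one of these tests funnels through the killprocess trace seen above: `kill -0 $pid` first proves the pid is still alive, `ps --no-headers -o comm= $pid` confirms it is actually an SPDK reactor (comm shows up as reactor_0) rather than some recycled pid, and only then is the signal sent. A condensed sketch of that defensive pattern (the real helper in autotest_common.sh also special-cases processes running under sudo):

    # sketch: kill a target only after checking liveness and identity
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                           # already gone
        [[ $(ps --no-headers -o comm= "$pid") == reactor_0 ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }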
00:27:33.469 [2024-11-20 05:38:53.313944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.731 [2024-11-20 05:38:53.472256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60509 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60509 /var/tmp/spdk2.sock 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60509 ']' 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:34.668 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:34.669 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:34.669 05:38:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:34.927 [2024-11-20 05:38:54.684997] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
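Every startup above blocks in waitforlisten, which prints the 'Waiting for process to start up and listen on UNIX domain socket ...' line and then retries (max_retries=100 in the trace) until the target's RPC socket answers. A minimal stand-in that only polls for the socket node (the real autotest_common.sh helper is more thorough, re-checking the pid along the way):

    # sketch: wait until a target's RPC socket appears, or give up
    waitforsock() {
        local sock=$1 i max_retries=${2:-100}
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 0; i < max_retries; i++ )); do
            [[ -S $sock ]] && return 0
            sleep 0.1
        done
        return 1
    }
    waitforsock /var/tmp/spdk2.sock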
00:27:34.927 [2024-11-20 05:38:54.685170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60509 ] 00:27:35.186 [2024-11-20 05:38:54.868088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.444 [2024-11-20 05:38:55.162231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.984 05:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:37.984 05:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:27:37.984 05:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60509 00:27:37.984 05:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60509 00:27:37.984 05:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60486 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60486 ']' 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60486 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60486 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:38.243 killing process with pid 60486 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60486' 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60486 00:27:38.243 05:38:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60486 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60509 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60509 ']' 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60509 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60509 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:44.807 killing process with pid 60509 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60509' 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60509 00:27:44.807 05:39:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60509 00:27:46.713 00:27:46.713 real 0m13.425s 00:27:46.713 user 0m13.413s 00:27:46.713 sys 0m1.770s 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:46.713 ************************************ 00:27:46.713 END TEST locking_app_on_unlocked_coremask 00:27:46.713 ************************************ 00:27:46.713 05:39:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:27:46.713 05:39:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:46.713 05:39:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:46.713 05:39:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:46.713 ************************************ 00:27:46.713 START TEST locking_app_on_locked_coremask 00:27:46.713 ************************************ 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60668 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60668 /var/tmp/spdk.sock 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60668 ']' 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:46.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:46.713 05:39:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:46.713 [2024-11-20 05:39:06.619383] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:27:46.713 [2024-11-20 05:39:06.619597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:27:46.972 [2024-11-20 05:39:06.800155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.230 [2024-11-20 05:39:06.945714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60695 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60695 /var/tmp/spdk2.sock 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60695 /var/tmp/spdk2.sock 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:48.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60695 /var/tmp/spdk2.sock 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60695 ']' 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:48.167 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:48.427 [2024-11-20 05:39:08.112157] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
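locking_app_on_locked_coremask expects this second launch to die, so the call is wrapped in the NOT helper whose trace appears above: run the command and invert its verdict, except that exit statuses above 128 (a signal death) are propagated rather than inverted, which is what the `(( es > 128 ))` test is for. The claim failure logged just below is therefore a pass. Stripped to its core:

    # sketch: assert a command fails cleanly (nonzero exit, not signal-killed)
    NOT() {
        "$@"
        local es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: real failure
        (( es != 0 ))                    # NOT succeeds iff the command failed
    }
    NOT waitforlisten 60695 /var/tmp/spdk2.sock && echo 'claim failed as expected'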
00:27:48.427 [2024-11-20 05:39:08.112407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60695 ] 00:27:48.427 [2024-11-20 05:39:08.296668] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60668 has claimed it. 00:27:48.427 [2024-11-20 05:39:08.296756] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:49.036 ERROR: process (pid: 60695) is no longer running 00:27:49.036 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60695) - No such process 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60668 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60668 00:27:49.036 05:39:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60668 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60668 ']' 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60668 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60668 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60668' 00:27:49.296 killing process with pid 60668 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60668 00:27:49.296 05:39:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60668 00:27:52.589 00:27:52.589 real 0m5.438s 00:27:52.589 user 0m5.460s 00:27:52.589 sys 0m0.930s 00:27:52.589 05:39:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:52.589 05:39:11 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:27:52.589 ************************************ 00:27:52.589 END TEST locking_app_on_locked_coremask 00:27:52.589 ************************************ 00:27:52.589 05:39:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:27:52.589 05:39:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:52.589 05:39:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:52.589 05:39:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:52.589 ************************************ 00:27:52.589 START TEST locking_overlapped_coremask 00:27:52.589 ************************************ 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60765 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60765 /var/tmp/spdk.sock 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60765 ']' 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:52.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:52.589 05:39:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:52.589 [2024-11-20 05:39:12.125205] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:27:52.589 [2024-11-20 05:39:12.125369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:27:52.589 [2024-11-20 05:39:12.306638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:52.589 [2024-11-20 05:39:12.462978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.589 [2024-11-20 05:39:12.463114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.589 [2024-11-20 05:39:12.463155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.968 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:53.968 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:27:53.968 05:39:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60788 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60788 /var/tmp/spdk2.sock 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60788 /var/tmp/spdk2.sock 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60788 /var/tmp/spdk2.sock 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60788 ']' 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:53.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:53.969 05:39:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:53.969 [2024-11-20 05:39:13.719258] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
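The overlap test pins the first target to cores 0-2 (-m 0x7) and asks the second for cores 2-4 (-m 0x1c): the two masks intersect at bit 2, so the second instance can never claim core 2, which is exactly the error that follows. The contended core falls out of one line of bitwise arithmetic:

    # sketch: which cores do masks 0x7 and 0x1c contend for?
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2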
00:27:53.969 [2024-11-20 05:39:13.719990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60788 ] 00:27:54.228 [2024-11-20 05:39:13.909293] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60765 has claimed it. 00:27:54.228 [2024-11-20 05:39:13.909380] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:27:54.487 ERROR: process (pid: 60788) is no longer running 00:27:54.487 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60788) - No such process 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60765 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60765 ']' 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60765 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60765 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60765' 00:27:54.487 killing process with pid 60765 00:27:54.487 05:39:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60765 00:27:54.487 05:39:14 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60765 00:27:57.771 00:27:57.771 real 0m5.230s 00:27:57.771 user 0m14.091s 00:27:57.771 sys 0m0.843s 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:27:57.771 ************************************ 00:27:57.771 END TEST locking_overlapped_coremask 00:27:57.771 ************************************ 00:27:57.771 05:39:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:27:57.771 05:39:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:57.771 05:39:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:57.771 05:39:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:27:57.771 ************************************ 00:27:57.771 START TEST locking_overlapped_coremask_via_rpc 00:27:57.771 ************************************ 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60858 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60858 /var/tmp/spdk.sock 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60858 ']' 00:27:57.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:57.771 05:39:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:57.771 [2024-11-20 05:39:17.425213] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:57.771 [2024-11-20 05:39:17.426056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60858 ] 00:27:57.771 [2024-11-20 05:39:17.609100] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:27:57.771 [2024-11-20 05:39:17.609250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.029 [2024-11-20 05:39:17.769358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.029 [2024-11-20 05:39:17.769517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.029 [2024-11-20 05:39:17.769552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60881 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60881 /var/tmp/spdk2.sock 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60881 ']' 00:27:58.969 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:27:59.229 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:59.229 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:27:59.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:27:59.229 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:59.229 05:39:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:59.229 [2024-11-20 05:39:19.015976] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:27:59.229 [2024-11-20 05:39:19.016670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:27:59.492 [2024-11-20 05:39:19.203424] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:27:59.493 [2024-11-20 05:39:19.203516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:59.762 [2024-11-20 05:39:19.520744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.762 [2024-11-20 05:39:19.523951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.763 [2024-11-20 05:39:19.523981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:02.313 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.314 [2024-11-20 05:39:21.830054] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60858 has claimed it. 
00:28:02.314 request: 00:28:02.314 { 00:28:02.314 "method": "framework_enable_cpumask_locks", 00:28:02.314 "req_id": 1 00:28:02.314 } 00:28:02.314 Got JSON-RPC error response 00:28:02.314 response: 00:28:02.314 { 00:28:02.314 "code": -32603, 00:28:02.314 "message": "Failed to claim CPU core: 2" 00:28:02.314 } 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60858 /var/tmp/spdk.sock 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60858 ']' 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:02.314 05:39:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60881 /var/tmp/spdk2.sock 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60881 ']' 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:28:02.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
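This is exactly the failure the NOT wrapper at cpu_locks.sh@156 asserts: once the first target has claimed its cores via framework_enable_cpumask_locks, the same RPC issued to the second target must fail, because core 2 is already held by pid 60858, and the JSON-RPC response above carries code -32603 with "Failed to claim CPU core: 2". The equivalent direct calls look like this (a sketch using the rpc.py path from this workspace; the harness goes through its rpc_cmd wrapper instead):

    # first target, default socket /var/tmp/spdk.sock: claims lock files for cores 0-2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

    # second target: expected to fail with -32603 "Failed to claim CPU core: 2"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks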
00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:02.314 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:28:02.573 00:28:02.573 real 0m5.061s 00:28:02.573 user 0m1.476s 00:28:02.573 sys 0m0.250s 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:02.573 05:39:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:28:02.573 ************************************ 00:28:02.573 END TEST locking_overlapped_coremask_via_rpc 00:28:02.573 ************************************ 00:28:02.573 05:39:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:28:02.573 05:39:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60858 ]] 00:28:02.573 05:39:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60858 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60858 ']' 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60858 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60858 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:02.573 killing process with pid 60858 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60858' 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60858 00:28:02.573 05:39:22 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60858 00:28:05.864 05:39:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60881 ]] 00:28:05.864 05:39:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60881 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60881 ']' 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60881 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:05.864 
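The check_remaining_locks step above verifies the lock files themselves: with the first target's mask claimed, glob-expanding /var/tmp/spdk_cpu_lock_* must yield exactly one file per core in the 0x7 mask. A standalone sketch of the same comparison, mirroring cpu_locks.sh@36-38:

    # lock files for cores 0-2 should be exactly what the glob finds
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match cores 0-2'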
05:39:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60881 00:28:05.864 killing process with pid 60881 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60881' 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60881 00:28:05.864 05:39:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60881 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60858 ]] 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60858 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60858 ']' 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60858 00:28:09.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60858) - No such process 00:28:09.158 Process with pid 60858 is not found 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60858 is not found' 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60881 ]] 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60881 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60881 ']' 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60881 00:28:09.158 Process with pid 60881 is not found 00:28:09.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60881) - No such process 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60881 is not found' 00:28:09.158 05:39:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:28:09.158 00:28:09.158 real 0m59.518s 00:28:09.158 user 1m40.895s 00:28:09.158 sys 0m8.675s 00:28:09.158 ************************************ 00:28:09.158 END TEST cpu_locks 00:28:09.158 ************************************ 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:09.158 05:39:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:28:09.158 ************************************ 00:28:09.158 END TEST event 00:28:09.158 ************************************ 00:28:09.158 00:28:09.158 real 1m33.908s 00:28:09.158 user 2m51.660s 00:28:09.158 sys 0m13.221s 00:28:09.158 05:39:28 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:09.158 05:39:28 event -- common/autotest_common.sh@10 -- # set +x 00:28:09.158 05:39:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:09.158 05:39:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:09.158 05:39:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:09.158 05:39:28 -- common/autotest_common.sh@10 -- # set +x 00:28:09.158 ************************************ 00:28:09.158 START TEST thread 00:28:09.158 ************************************ 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:28:09.158 * Looking for test storage... 
00:28:09.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:09.158 05:39:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.158 05:39:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.158 05:39:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.158 05:39:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.158 05:39:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.158 05:39:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.158 05:39:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.158 05:39:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.158 05:39:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.158 05:39:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.158 05:39:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.158 05:39:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:28:09.158 05:39:28 thread -- scripts/common.sh@345 -- # : 1 00:28:09.158 05:39:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.158 05:39:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.158 05:39:28 thread -- scripts/common.sh@365 -- # decimal 1 00:28:09.158 05:39:28 thread -- scripts/common.sh@353 -- # local d=1 00:28:09.158 05:39:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.158 05:39:28 thread -- scripts/common.sh@355 -- # echo 1 00:28:09.158 05:39:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.158 05:39:28 thread -- scripts/common.sh@366 -- # decimal 2 00:28:09.158 05:39:28 thread -- scripts/common.sh@353 -- # local d=2 00:28:09.158 05:39:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.158 05:39:28 thread -- scripts/common.sh@355 -- # echo 2 00:28:09.158 05:39:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.158 05:39:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.158 05:39:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.158 05:39:28 thread -- scripts/common.sh@368 -- # return 0 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.158 05:39:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.159 --rc genhtml_branch_coverage=1 00:28:09.159 --rc genhtml_function_coverage=1 00:28:09.159 --rc genhtml_legend=1 00:28:09.159 --rc geninfo_all_blocks=1 00:28:09.159 --rc geninfo_unexecuted_blocks=1 00:28:09.159 00:28:09.159 ' 00:28:09.159 05:39:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.159 --rc genhtml_branch_coverage=1 00:28:09.159 --rc genhtml_function_coverage=1 00:28:09.159 --rc genhtml_legend=1 00:28:09.159 --rc geninfo_all_blocks=1 00:28:09.159 --rc geninfo_unexecuted_blocks=1 00:28:09.159 00:28:09.159 ' 00:28:09.159 05:39:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:09.159 --rc genhtml_branch_coverage=1 00:28:09.159 --rc genhtml_function_coverage=1 00:28:09.159 --rc genhtml_legend=1 00:28:09.159 --rc geninfo_all_blocks=1 00:28:09.159 --rc geninfo_unexecuted_blocks=1 00:28:09.159 00:28:09.159 ' 00:28:09.159 05:39:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:09.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.159 --rc genhtml_branch_coverage=1 00:28:09.159 --rc genhtml_function_coverage=1 00:28:09.159 --rc genhtml_legend=1 00:28:09.159 --rc geninfo_all_blocks=1 00:28:09.159 --rc geninfo_unexecuted_blocks=1 00:28:09.159 00:28:09.159 ' 00:28:09.159 05:39:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:09.159 05:39:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:28:09.159 05:39:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:09.159 05:39:28 thread -- common/autotest_common.sh@10 -- # set +x 00:28:09.159 ************************************ 00:28:09.159 START TEST thread_poller_perf 00:28:09.159 ************************************ 00:28:09.159 05:39:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:28:09.159 [2024-11-20 05:39:28.967438] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:09.159 [2024-11-20 05:39:28.967657] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61093 ] 00:28:09.416 [2024-11-20 05:39:29.151666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.416 [2024-11-20 05:39:29.308014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.416 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:28:10.815 [2024-11-20T05:39:30.734Z] ====================================== 00:28:10.815 [2024-11-20T05:39:30.734Z] busy:2301870850 (cyc) 00:28:10.815 [2024-11-20T05:39:30.734Z] total_run_count: 350000 00:28:10.815 [2024-11-20T05:39:30.734Z] tsc_hz: 2290000000 (cyc) 00:28:10.815 [2024-11-20T05:39:30.734Z] ====================================== 00:28:10.815 [2024-11-20T05:39:30.734Z] poller_cost: 6576 (cyc), 2871 (nsec) 00:28:10.815 00:28:10.815 real 0m1.651s 00:28:10.815 user 0m1.446s 00:28:10.815 sys 0m0.098s 00:28:10.815 ************************************ 00:28:10.815 END TEST thread_poller_perf 00:28:10.815 ************************************ 00:28:10.815 05:39:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:10.815 05:39:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:10.815 05:39:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:10.815 05:39:30 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:28:10.815 05:39:30 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:10.815 05:39:30 thread -- common/autotest_common.sh@10 -- # set +x 00:28:10.815 ************************************ 00:28:10.815 START TEST thread_poller_perf 00:28:10.815 ************************************ 00:28:10.815 05:39:30 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:28:10.815 [2024-11-20 05:39:30.699889] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:10.815 [2024-11-20 05:39:30.700100] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61135 ] 00:28:11.085 [2024-11-20 05:39:30.883093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.343 [2024-11-20 05:39:31.035355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.343 Running 1000 pollers for 1 seconds with 0 microseconds period. 
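In both poller_perf runs, poller_cost is the busy TSC cycle count divided by total_run_count, converted to nanoseconds via tsc_hz: for the 1-microsecond-period run above, 2301870850 / 350000 gives 6576 cycles, and 6576 cycles at 2.29 GHz is about 2871 ns, matching the table. The same arithmetic in shell, with the values copied from that table:

    busy=2301870850 runs=350000 tsc_hz=2290000000
    echo "poller_cost: $(( busy / runs )) cyc, $(( busy * 1000000000 / tsc_hz / runs )) nsec"
    # prints: poller_cost: 6576 cyc, 2871 nsec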
00:28:12.722 [2024-11-20T05:39:32.641Z] ====================================== 00:28:12.722 [2024-11-20T05:39:32.642Z] busy:2294111182 (cyc) 00:28:12.723 [2024-11-20T05:39:32.642Z] total_run_count: 4784000 00:28:12.723 [2024-11-20T05:39:32.642Z] tsc_hz: 2290000000 (cyc) 00:28:12.723 [2024-11-20T05:39:32.642Z] ====================================== 00:28:12.723 [2024-11-20T05:39:32.642Z] poller_cost: 479 (cyc), 209 (nsec) 00:28:12.723 00:28:12.723 real 0m1.634s 00:28:12.723 user 0m1.409s 00:28:12.723 sys 0m0.117s 00:28:12.723 05:39:32 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:12.723 05:39:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:28:12.723 ************************************ 00:28:12.723 END TEST thread_poller_perf 00:28:12.723 ************************************ 00:28:12.723 05:39:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:28:12.723 ************************************ 00:28:12.723 END TEST thread 00:28:12.723 ************************************ 00:28:12.723 00:28:12.723 real 0m3.644s 00:28:12.723 user 0m3.016s 00:28:12.723 sys 0m0.426s 00:28:12.723 05:39:32 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:12.723 05:39:32 thread -- common/autotest_common.sh@10 -- # set +x 00:28:12.723 05:39:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:28:12.723 05:39:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:12.723 05:39:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:12.723 05:39:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:12.723 05:39:32 -- common/autotest_common.sh@10 -- # set +x 00:28:12.723 ************************************ 00:28:12.723 START TEST app_cmdline 00:28:12.723 ************************************ 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:28:12.723 * Looking for test storage... 
00:28:12.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.723 05:39:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.723 --rc genhtml_branch_coverage=1 00:28:12.723 --rc genhtml_function_coverage=1 00:28:12.723 --rc genhtml_legend=1 00:28:12.723 --rc geninfo_all_blocks=1 00:28:12.723 --rc geninfo_unexecuted_blocks=1 00:28:12.723 00:28:12.723 ' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.723 --rc genhtml_branch_coverage=1 00:28:12.723 --rc genhtml_function_coverage=1 00:28:12.723 --rc genhtml_legend=1 00:28:12.723 --rc geninfo_all_blocks=1 00:28:12.723 --rc geninfo_unexecuted_blocks=1 00:28:12.723 
00:28:12.723 ' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.723 --rc genhtml_branch_coverage=1 00:28:12.723 --rc genhtml_function_coverage=1 00:28:12.723 --rc genhtml_legend=1 00:28:12.723 --rc geninfo_all_blocks=1 00:28:12.723 --rc geninfo_unexecuted_blocks=1 00:28:12.723 00:28:12.723 ' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.723 --rc genhtml_branch_coverage=1 00:28:12.723 --rc genhtml_function_coverage=1 00:28:12.723 --rc genhtml_legend=1 00:28:12.723 --rc geninfo_all_blocks=1 00:28:12.723 --rc geninfo_unexecuted_blocks=1 00:28:12.723 00:28:12.723 ' 00:28:12.723 05:39:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:28:12.723 05:39:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61224 00:28:12.723 05:39:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:28:12.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.723 05:39:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61224 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 61224 ']' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:12.723 05:39:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:12.981 [2024-11-20 05:39:32.735052] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
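cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; everything that follows, including the env_dpdk_get_mem_stats failure further down, hangs off that restriction. A sketch of what the test exercises, with the same binary and rpc.py paths as elsewhere in this run:

    # target restricted to exactly two RPC methods
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601 Method not found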
00:28:12.981 [2024-11-20 05:39:32.735760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61224 ] 00:28:13.238 [2024-11-20 05:39:32.916444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.238 [2024-11-20 05:39:33.056064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.171 05:39:34 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:14.171 05:39:34 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:28:14.171 05:39:34 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:28:14.428 { 00:28:14.428 "version": "SPDK v25.01-pre git sha1 57b682926", 00:28:14.428 "fields": { 00:28:14.428 "major": 25, 00:28:14.428 "minor": 1, 00:28:14.428 "patch": 0, 00:28:14.428 "suffix": "-pre", 00:28:14.428 "commit": "57b682926" 00:28:14.428 } 00:28:14.428 } 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:28:14.428 05:39:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.428 05:39:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:28:14.428 05:39:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:14.428 05:39:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.686 05:39:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:28:14.686 05:39:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:28:14.686 05:39:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:14.686 05:39:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:28:14.686 05:39:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:14.686 05:39:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.686 05:39:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.686 05:39:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.686 05:39:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:28:14.687 request: 00:28:14.687 { 00:28:14.687 "method": "env_dpdk_get_mem_stats", 00:28:14.687 "req_id": 1 00:28:14.687 } 00:28:14.687 Got JSON-RPC error response 00:28:14.687 response: 00:28:14.687 { 00:28:14.687 "code": -32601, 00:28:14.687 "message": "Method not found" 00:28:14.687 } 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:14.687 05:39:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61224 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 61224 ']' 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 61224 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:14.687 05:39:34 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61224 00:28:14.945 05:39:34 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:14.945 05:39:34 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:14.945 05:39:34 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61224' 00:28:14.945 killing process with pid 61224 00:28:14.945 05:39:34 app_cmdline -- common/autotest_common.sh@971 -- # kill 61224 00:28:14.945 05:39:34 app_cmdline -- common/autotest_common.sh@976 -- # wait 61224 00:28:17.477 00:28:17.477 real 0m4.902s 00:28:17.477 user 0m4.960s 00:28:17.477 sys 0m0.841s 00:28:17.477 05:39:37 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:17.477 05:39:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:28:17.477 ************************************ 00:28:17.477 END TEST app_cmdline 00:28:17.477 ************************************ 00:28:17.477 05:39:37 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:17.477 05:39:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:17.477 05:39:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:17.477 05:39:37 -- common/autotest_common.sh@10 -- # set +x 00:28:17.477 ************************************ 00:28:17.477 START TEST version 00:28:17.477 ************************************ 00:28:17.477 05:39:37 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:28:17.736 * Looking for test storage... 
00:28:17.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1691 -- # lcov --version 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:17.736 05:39:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.736 05:39:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.736 05:39:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.736 05:39:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.736 05:39:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.736 05:39:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.736 05:39:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.736 05:39:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.736 05:39:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.736 05:39:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.736 05:39:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.736 05:39:37 version -- scripts/common.sh@344 -- # case "$op" in 00:28:17.736 05:39:37 version -- scripts/common.sh@345 -- # : 1 00:28:17.736 05:39:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.736 05:39:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:17.736 05:39:37 version -- scripts/common.sh@365 -- # decimal 1 00:28:17.736 05:39:37 version -- scripts/common.sh@353 -- # local d=1 00:28:17.736 05:39:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.736 05:39:37 version -- scripts/common.sh@355 -- # echo 1 00:28:17.736 05:39:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.736 05:39:37 version -- scripts/common.sh@366 -- # decimal 2 00:28:17.736 05:39:37 version -- scripts/common.sh@353 -- # local d=2 00:28:17.736 05:39:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.736 05:39:37 version -- scripts/common.sh@355 -- # echo 2 00:28:17.736 05:39:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.736 05:39:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.736 05:39:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.736 05:39:37 version -- scripts/common.sh@368 -- # return 0 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:17.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.736 --rc genhtml_branch_coverage=1 00:28:17.736 --rc genhtml_function_coverage=1 00:28:17.736 --rc genhtml_legend=1 00:28:17.736 --rc geninfo_all_blocks=1 00:28:17.736 --rc geninfo_unexecuted_blocks=1 00:28:17.736 00:28:17.736 ' 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:17.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.736 --rc genhtml_branch_coverage=1 00:28:17.736 --rc genhtml_function_coverage=1 00:28:17.736 --rc genhtml_legend=1 00:28:17.736 --rc geninfo_all_blocks=1 00:28:17.736 --rc geninfo_unexecuted_blocks=1 00:28:17.736 00:28:17.736 ' 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:17.736 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:17.736 --rc genhtml_branch_coverage=1 00:28:17.736 --rc genhtml_function_coverage=1 00:28:17.736 --rc genhtml_legend=1 00:28:17.736 --rc geninfo_all_blocks=1 00:28:17.736 --rc geninfo_unexecuted_blocks=1 00:28:17.736 00:28:17.736 ' 00:28:17.736 05:39:37 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:17.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.736 --rc genhtml_branch_coverage=1 00:28:17.736 --rc genhtml_function_coverage=1 00:28:17.736 --rc genhtml_legend=1 00:28:17.736 --rc geninfo_all_blocks=1 00:28:17.736 --rc geninfo_unexecuted_blocks=1 00:28:17.736 00:28:17.736 ' 00:28:17.736 05:39:37 version -- app/version.sh@17 -- # get_header_version major 00:28:17.736 05:39:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # cut -f2 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # tr -d '"' 00:28:17.736 05:39:37 version -- app/version.sh@17 -- # major=25 00:28:17.736 05:39:37 version -- app/version.sh@18 -- # get_header_version minor 00:28:17.736 05:39:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # cut -f2 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # tr -d '"' 00:28:17.736 05:39:37 version -- app/version.sh@18 -- # minor=1 00:28:17.736 05:39:37 version -- app/version.sh@19 -- # get_header_version patch 00:28:17.736 05:39:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # cut -f2 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # tr -d '"' 00:28:17.736 05:39:37 version -- app/version.sh@19 -- # patch=0 00:28:17.736 05:39:37 version -- app/version.sh@20 -- # get_header_version suffix 00:28:17.736 05:39:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # cut -f2 00:28:17.736 05:39:37 version -- app/version.sh@14 -- # tr -d '"' 00:28:17.736 05:39:37 version -- app/version.sh@20 -- # suffix=-pre 00:28:17.736 05:39:37 version -- app/version.sh@22 -- # version=25.1 00:28:17.736 05:39:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:28:17.736 05:39:37 version -- app/version.sh@28 -- # version=25.1rc0 00:28:17.736 05:39:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:28:17.736 05:39:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:28:17.995 05:39:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:28:17.995 05:39:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:28:17.995 ************************************ 00:28:17.995 END TEST version 00:28:17.995 ************************************ 00:28:17.995 00:28:17.995 real 0m0.321s 00:28:17.995 user 0m0.192s 00:28:17.995 sys 0m0.186s 00:28:17.995 05:39:37 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:17.995 05:39:37 version -- common/autotest_common.sh@10 -- # set +x 00:28:17.995 05:39:37 -- 
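The version test that just completed recombines the SPDK_VERSION_* macros from include/spdk/version.h into 25.1rc0 (major.minor, with the patch appended only when nonzero and an rc0 tag while the suffix is -pre) and checks the result against the installed Python package. A condensed sketch of that pipeline; cut -f2 relies on the tab between each macro name and its value:

    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ $suffix == -pre ]] && version=${version}rc0
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "versions agree: $version"   # 25.1rc0 in this build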
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:28:17.995 05:39:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:28:17.995 05:39:37 -- spdk/autotest.sh@194 -- # uname -s 00:28:17.995 05:39:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:28:17.995 05:39:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:17.995 05:39:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:28:17.995 05:39:37 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:28:17.995 05:39:37 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:17.995 05:39:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:17.995 05:39:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:17.995 05:39:37 -- common/autotest_common.sh@10 -- # set +x 00:28:17.995 ************************************ 00:28:17.995 START TEST blockdev_nvme 00:28:17.995 ************************************ 00:28:17.995 05:39:37 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:17.995 * Looking for test storage... 00:28:17.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:17.995 05:39:37 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:17.995 05:39:37 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:28:17.995 05:39:37 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:18.253 05:39:37 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.253 05:39:37 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.254 05:39:37 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:28:18.254 05:39:37 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.254 05:39:37 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:18.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.254 --rc genhtml_branch_coverage=1 00:28:18.254 --rc genhtml_function_coverage=1 00:28:18.254 --rc genhtml_legend=1 00:28:18.254 --rc geninfo_all_blocks=1 00:28:18.254 --rc geninfo_unexecuted_blocks=1 00:28:18.254 00:28:18.254 ' 00:28:18.254 05:39:37 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:18.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.254 --rc genhtml_branch_coverage=1 00:28:18.254 --rc genhtml_function_coverage=1 00:28:18.254 --rc genhtml_legend=1 00:28:18.254 --rc geninfo_all_blocks=1 00:28:18.254 --rc geninfo_unexecuted_blocks=1 00:28:18.254 00:28:18.254 ' 00:28:18.254 05:39:37 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:18.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.254 --rc genhtml_branch_coverage=1 00:28:18.254 --rc genhtml_function_coverage=1 00:28:18.254 --rc genhtml_legend=1 00:28:18.254 --rc geninfo_all_blocks=1 00:28:18.254 --rc geninfo_unexecuted_blocks=1 00:28:18.254 00:28:18.254 ' 00:28:18.254 05:39:37 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:18.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.254 --rc genhtml_branch_coverage=1 00:28:18.254 --rc genhtml_function_coverage=1 00:28:18.254 --rc genhtml_legend=1 00:28:18.254 --rc geninfo_all_blocks=1 00:28:18.254 --rc geninfo_unexecuted_blocks=1 00:28:18.254 00:28:18.254 ' 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:18.254 05:39:37 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:28:18.254 05:39:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61418 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:18.254 05:39:38 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61418 00:28:18.254 05:39:38 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 61418 ']' 00:28:18.254 05:39:38 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.254 05:39:38 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.254 05:39:38 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.254 05:39:38 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.254 05:39:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:18.254 [2024-11-20 05:39:38.119656] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:28:18.254 [2024-11-20 05:39:38.119949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61418 ] 00:28:18.513 [2024-11-20 05:39:38.302316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.771 [2024-11-20 05:39:38.444402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.708 05:39:39 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:19.708 05:39:39 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:28:19.708 05:39:39 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:28:19.708 05:39:39 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:28:19.708 05:39:39 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:28:19.708 05:39:39 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:28:19.708 05:39:39 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:19.708 05:39:39 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:28:19.708 05:39:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.708 05:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.970 05:39:39 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.970 05:39:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:28:19.970 05:39:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.970 05:39:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.970 05:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.229 05:39:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.229 05:39:39 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:28:20.229 05:39:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:28:20.229 05:39:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:20.229 05:39:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d05bc8e7-53e0-4616-8962-141f70751c28"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d05bc8e7-53e0-4616-8962-141f70751c28",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1119b325-828e-4d2b-8cb3-ef6b54a6164e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1119b325-828e-4d2b-8cb3-ef6b54a6164e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a15c7311-626a-44cd-aee6-2271367ee6e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a15c7311-626a-44cd-aee6-2271367ee6e7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "895eca94-d38b-4c7a-bbba-9e032d699882"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "895eca94-d38b-4c7a-bbba-9e032d699882",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e3ea879e-a03d-433b-88d7-7f3190ccbe16"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "e3ea879e-a03d-433b-88d7-7f3190ccbe16",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fd36fc1b-58bd-4c73-8438-f6ab424fe6ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fd36fc1b-58bd-4c73-8438-f6ab424fe6ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:28:20.229 05:39:40 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61418 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 61418 ']' 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 61418 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:28:20.229 05:39:40 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61418 00:28:20.229 killing process with pid 61418 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61418' 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 61418 00:28:20.229 05:39:40 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 61418 00:28:23.518 05:39:42 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.518 05:39:42 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:23.518 05:39:42 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:28:23.518 05:39:42 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:23.518 05:39:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:23.518 ************************************ 00:28:23.518 START TEST bdev_hello_world 00:28:23.518 ************************************ 00:28:23.518 05:39:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:23.518 [2024-11-20 05:39:42.850174] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:23.518 [2024-11-20 05:39:42.850416] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61519 ] 00:28:23.518 [2024-11-20 05:39:43.015208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.518 [2024-11-20 05:39:43.153045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.105 [2024-11-20 05:39:43.861283] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:24.105 [2024-11-20 05:39:43.861462] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:24.105 [2024-11-20 05:39:43.861503] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:24.105 [2024-11-20 05:39:43.864536] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:24.105 [2024-11-20 05:39:43.865206] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:24.105 [2024-11-20 05:39:43.865284] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:24.105 [2024-11-20 05:39:43.865533] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
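The hello_world run above boils down to two commands: generate a bdev JSON config for the locally attached NVMe controllers, then point the hello_bdev example at one bdev from that config. A minimal sketch for reproducing it by hand, assuming the same repo layout as this run (the /tmp/nvme.json path and the standalone rpc.py call are illustrative, not taken from the log):

    cd /home/vagrant/spdk_repo/spdk
    # Enumerate local NVMe controllers into bdev_nvme_attach_controller JSON,
    # as setup_nvme_conf did via gen_nvme.sh above
    scripts/gen_nvme.sh > /tmp/nvme.json
    # With a target running, list unclaimed bdevs the way blockdev.sh filters them
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
    # Run the example against the first controller's namespace
    sudo build/examples/hello_bdev --json /tmp/nvme.json -b Nvme0n1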
00:28:24.105 00:28:24.105 [2024-11-20 05:39:43.865593] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:25.493 00:28:25.493 real 0m2.356s 00:28:25.493 user 0m1.917s 00:28:25.493 sys 0m0.331s 00:28:25.493 05:39:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:25.493 ************************************ 00:28:25.493 END TEST bdev_hello_world 00:28:25.493 ************************************ 00:28:25.493 05:39:45 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:28:25.493 05:39:45 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:28:25.493 05:39:45 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:25.493 05:39:45 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.493 05:39:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:25.493 ************************************ 00:28:25.493 START TEST bdev_bounds 00:28:25.493 ************************************ 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61561 00:28:25.493 Process bdevio pid: 61561 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61561' 00:28:25.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61561 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61561 ']' 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:25.493 05:39:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:25.493 [2024-11-20 05:39:45.279872] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
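The bdev_bounds stage launched here is a two-process flow: bdevio is started with -w so it loads the bdevs from the JSON config and then waits, and tests.py triggers the suites over RPC. A condensed sketch under the same assumptions (paths as in this run; the default /var/tmp/spdk.sock socket is implied):

    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
    # Once the app is listening, kick off one suite per bdev; this prints the
    # CUnit "I/O targets" banner and the per-test results seen below
    sudo test/bdev/bdevio/tests.py perform_tests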
00:28:25.493 [2024-11-20 05:39:45.280004] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61561 ] 00:28:25.753 [2024-11-20 05:39:45.459608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:25.753 [2024-11-20 05:39:45.606926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.753 [2024-11-20 05:39:45.607070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.753 [2024-11-20 05:39:45.607108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.691 05:39:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:26.691 05:39:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:28:26.691 05:39:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:26.691 I/O targets: 00:28:26.691 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:28:26.691 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:28:26.691 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:26.691 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:26.691 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:28:26.691 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:28:26.691 00:28:26.691 00:28:26.691 CUnit - A unit testing framework for C - Version 2.1-3 00:28:26.691 http://cunit.sourceforge.net/ 00:28:26.691 00:28:26.691 00:28:26.691 Suite: bdevio tests on: Nvme3n1 00:28:26.691 Test: blockdev write read block ...passed 00:28:26.691 Test: blockdev write zeroes read block ...passed 00:28:26.691 Test: blockdev write zeroes read no split ...passed 00:28:26.691 Test: blockdev write zeroes read split ...passed 00:28:26.691 Test: blockdev write zeroes read split partial ...passed 00:28:26.691 Test: blockdev reset ...[2024-11-20 05:39:46.559933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:28:26.691 passed 00:28:26.691 Test: blockdev write read 8 blocks ...[2024-11-20 05:39:46.563881] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:28:26.691 passed 00:28:26.691 Test: blockdev write read size > 128k ...passed 00:28:26.691 Test: blockdev write read invalid size ...passed 00:28:26.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:26.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:26.691 Test: blockdev write read max offset ...passed 00:28:26.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:26.691 Test: blockdev writev readv 8 blocks ...passed 00:28:26.691 Test: blockdev writev readv 30 x 1block ...passed 00:28:26.691 Test: blockdev writev readv block ...passed 00:28:26.691 Test: blockdev writev readv size > 128k ...passed 00:28:26.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:26.691 Test: blockdev comparev and writev ...[2024-11-20 05:39:46.572097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b320a000 len:0x1000 00:28:26.691 [2024-11-20 05:39:46.572147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:26.691 passed 00:28:26.691 Test: blockdev nvme passthru rw ...passed 00:28:26.691 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:39:46.572920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:26.691 [2024-11-20 05:39:46.573042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:26.691 passed 00:28:26.691 Test: blockdev nvme admin passthru ...passed 00:28:26.691 Test: blockdev copy ...passed 00:28:26.691 Suite: bdevio tests on: Nvme2n3 00:28:26.691 Test: blockdev write read block ...passed 00:28:26.691 Test: blockdev write zeroes read block ...passed 00:28:26.951 Test: blockdev write zeroes read no split ...passed 00:28:26.951 Test: blockdev write zeroes read split ...passed 00:28:26.951 Test: blockdev write zeroes read split partial ...passed 00:28:26.951 Test: blockdev reset ...[2024-11-20 05:39:46.657532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:28:26.951 [2024-11-20 05:39:46.662031] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:28:26.951 passed 00:28:26.951 Test: blockdev write read 8 blocks ...passed 00:28:26.951 Test: blockdev write read size > 128k ...passed 00:28:26.951 Test: blockdev write read invalid size ...passed 00:28:26.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:26.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:26.951 Test: blockdev write read max offset ...passed 00:28:26.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:26.951 Test: blockdev writev readv 8 blocks ...passed 00:28:26.951 Test: blockdev writev readv 30 x 1block ...passed 00:28:26.951 Test: blockdev writev readv block ...passed 00:28:26.951 Test: blockdev writev readv size > 128k ...passed 00:28:26.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:26.951 Test: blockdev comparev and writev ...[2024-11-20 05:39:46.671112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295c06000 len:0x1000 00:28:26.951 [2024-11-20 05:39:46.671244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:26.951 passed 00:28:26.951 Test: blockdev nvme passthru rw ...passed 00:28:26.951 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:39:46.672175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:26.951 [2024-11-20 05:39:46.672286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:26.951 passed 00:28:26.951 Test: blockdev nvme admin passthru ...passed 00:28:26.951 Test: blockdev copy ...passed 00:28:26.951 Suite: bdevio tests on: Nvme2n2 00:28:26.951 Test: blockdev write read block ...passed 00:28:26.951 Test: blockdev write zeroes read block ...passed 00:28:26.951 Test: blockdev write zeroes read no split ...passed 00:28:26.951 Test: blockdev write zeroes read split ...passed 00:28:26.951 Test: blockdev write zeroes read split partial ...passed 00:28:26.951 Test: blockdev reset ...[2024-11-20 05:39:46.758504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:28:26.951 [2024-11-20 05:39:46.762965] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:28:26.951 passed 00:28:26.951 Test: blockdev write read 8 blocks ...passed 00:28:26.951 Test: blockdev write read size > 128k ...passed 00:28:26.951 Test: blockdev write read invalid size ...passed 00:28:26.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:26.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:26.951 Test: blockdev write read max offset ...passed 00:28:26.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:26.951 Test: blockdev writev readv 8 blocks ...passed 00:28:26.951 Test: blockdev writev readv 30 x 1block ...passed 00:28:26.951 Test: blockdev writev readv block ...passed 00:28:26.951 Test: blockdev writev readv size > 128k ...passed 00:28:26.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:26.951 Test: blockdev comparev and writev ...[2024-11-20 05:39:46.771892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c323c000 len:0x1000 00:28:26.951 [2024-11-20 05:39:46.772044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:26.951 passed 00:28:26.951 Test: blockdev nvme passthru rw ...passed 00:28:26.951 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:39:46.773002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:26.951 [2024-11-20 05:39:46.773120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:26.951 passed 00:28:26.951 Test: blockdev nvme admin passthru ...passed 00:28:26.951 Test: blockdev copy ...passed 00:28:26.951 Suite: bdevio tests on: Nvme2n1 00:28:26.951 Test: blockdev write read block ...passed 00:28:26.951 Test: blockdev write zeroes read block ...passed 00:28:26.951 Test: blockdev write zeroes read no split ...passed 00:28:26.951 Test: blockdev write zeroes read split ...passed 00:28:26.951 Test: blockdev write zeroes read split partial ...passed 00:28:26.951 Test: blockdev reset ...[2024-11-20 05:39:46.860310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:28:26.952 [2024-11-20 05:39:46.864759] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:28:26.952 passed 00:28:26.952 Test: blockdev write read 8 blocks ...passed 00:28:26.952 Test: blockdev write read size > 128k ...passed 00:28:26.952 Test: blockdev write read invalid size ...passed 00:28:26.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:26.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:26.952 Test: blockdev write read max offset ...passed 00:28:26.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:26.952 Test: blockdev writev readv 8 blocks ...passed 00:28:27.211 Test: blockdev writev readv 30 x 1block ...passed 00:28:27.211 Test: blockdev writev readv block ...passed 00:28:27.211 Test: blockdev writev readv size > 128k ...passed 00:28:27.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:27.211 Test: blockdev comparev and writev ...[2024-11-20 05:39:46.873465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3238000 len:0x1000 00:28:27.211 [2024-11-20 05:39:46.873520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:27.211 passed 00:28:27.211 Test: blockdev nvme passthru rw ...passed 00:28:27.211 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:39:46.874283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:27.211 [2024-11-20 05:39:46.874388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:27.211 passed 00:28:27.211 Test: blockdev nvme admin passthru ...passed 00:28:27.211 Test: blockdev copy ...passed 00:28:27.211 Suite: bdevio tests on: Nvme1n1 00:28:27.211 Test: blockdev write read block ...passed 00:28:27.212 Test: blockdev write zeroes read block ...passed 00:28:27.212 Test: blockdev write zeroes read no split ...passed 00:28:27.212 Test: blockdev write zeroes read split ...passed 00:28:27.212 Test: blockdev write zeroes read split partial ...passed 00:28:27.212 Test: blockdev reset ...[2024-11-20 05:39:46.958553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:28:27.212 [2024-11-20 05:39:46.962852] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:28:27.212 passed 00:28:27.212 Test: blockdev write read 8 blocks ...passed 00:28:27.212 Test: blockdev write read size > 128k ...passed 00:28:27.212 Test: blockdev write read invalid size ...passed 00:28:27.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:27.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:27.212 Test: blockdev write read max offset ...passed 00:28:27.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:27.212 Test: blockdev writev readv 8 blocks ...passed 00:28:27.212 Test: blockdev writev readv 30 x 1block ...passed 00:28:27.212 Test: blockdev writev readv block ...passed 00:28:27.212 Test: blockdev writev readv size > 128k ...passed 00:28:27.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:27.212 Test: blockdev comparev and writev ...[2024-11-20 05:39:46.971596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3234000 len:0x1000 00:28:27.212 [2024-11-20 05:39:46.971724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:27.212 passed 00:28:27.212 Test: blockdev nvme passthru rw ...passed 00:28:27.212 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:39:46.972628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:28:27.212 [2024-11-20 05:39:46.972734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:28:27.212 passed 00:28:27.212 Test: blockdev nvme admin passthru ...passed 00:28:27.212 Test: blockdev copy ...passed 00:28:27.212 Suite: bdevio tests on: Nvme0n1 00:28:27.212 Test: blockdev write read block ...passed 00:28:27.212 Test: blockdev write zeroes read block ...passed 00:28:27.212 Test: blockdev write zeroes read no split ...passed 00:28:27.212 Test: blockdev write zeroes read split ...passed 00:28:27.212 Test: blockdev write zeroes read split partial ...passed 00:28:27.212 Test: blockdev reset ...[2024-11-20 05:39:47.060871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:28:27.212 [2024-11-20 05:39:47.065203] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:28:27.212 passed 00:28:27.212 Test: blockdev write read 8 blocks ...passed 00:28:27.212 Test: blockdev write read size > 128k ...passed 00:28:27.212 Test: blockdev write read invalid size ...passed 00:28:27.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:27.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:27.212 Test: blockdev write read max offset ...passed 00:28:27.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:27.212 Test: blockdev writev readv 8 blocks ...passed 00:28:27.212 Test: blockdev writev readv 30 x 1block ...passed 00:28:27.212 Test: blockdev writev readv block ...passed 00:28:27.212 Test: blockdev writev readv size > 128k ...passed 00:28:27.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:27.212 Test: blockdev comparev and writev ...[2024-11-20 05:39:47.073775] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:28:27.212 separate metadata which is not supported yet. 
00:28:27.212 passed 00:28:27.212 Test: blockdev nvme passthru rw ...passed 00:28:27.212 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:39:47.074461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:28:27.212 [2024-11-20 05:39:47.074605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:28:27.212 passed 00:28:27.212 Test: blockdev nvme admin passthru ...passed 00:28:27.212 Test: blockdev copy ...passed 00:28:27.212 00:28:27.212 Run Summary: Type Total Ran Passed Failed Inactive 00:28:27.212 suites 6 6 n/a 0 0 00:28:27.212 tests 138 138 138 0 0 00:28:27.212 asserts 893 893 893 0 n/a 00:28:27.212 00:28:27.212 Elapsed time = 1.627 seconds 00:28:27.212 0 00:28:27.212 05:39:47 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61561 00:28:27.212 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61561 ']' 00:28:27.212 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61561 00:28:27.212 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:28:27.212 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:27.212 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61561 00:28:27.471 killing process with pid 61561 00:28:27.471 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:27.471 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:27.471 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61561' 00:28:27.471 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61561 00:28:27.471 05:39:47 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61561 00:28:28.862 ************************************ 00:28:28.862 END TEST bdev_bounds 00:28:28.862 ************************************ 00:28:28.862 05:39:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:28:28.862 00:28:28.862 real 0m3.186s 00:28:28.862 user 0m8.093s 00:28:28.862 sys 0m0.515s 00:28:28.862 05:39:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:28.862 05:39:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:28:28.862 05:39:48 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:28:28.862 05:39:48 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:28:28.862 05:39:48 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:28.862 05:39:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:28.862 ************************************ 00:28:28.862 START TEST bdev_nbd 00:28:28.862 ************************************ 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:28:28.862 05:39:48 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61630 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61630 /var/tmp/spdk-nbd.sock 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61630 ']' 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:28.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.862 05:39:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:28.862 [2024-11-20 05:39:48.539094] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
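The bdev_nbd stage starting here exports each bdev as a kernel /dev/nbdN device through a minimal bdev_svc app on its own RPC socket, then treats a single direct-I/O read as the readiness probe. A sketch of one iteration of that loop, assuming the paths from this run (the Nvme0n1 -> /dev/nbd0 pairing is just the first one; nbd_start_disk, nbd_get_disks, and nbd_stop_disk are the stock SPDK RPCs exercised below):

    sock=/var/tmp/spdk-nbd.sock
    sudo test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json test/bdev/bdev.json &
    # Export a bdev as a kernel block device, then probe it the way waitfornbd does
    sudo scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    grep -q -w nbd0 /proc/partitions                                 # device node registered?
    sudo dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # one 4 KiB direct read
    sudo scripts/rpc.py -s "$sock" nbd_get_disks                     # JSON list of nbd_device/bdev_name pairs
    sudo scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0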
00:28:28.862 [2024-11-20 05:39:48.539342] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.862 [2024-11-20 05:39:48.722522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.136 [2024-11-20 05:39:48.867826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:30.083 1+0 records in 
00:28:30.083 1+0 records out 00:28:30.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701201 s, 5.8 MB/s 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:30.083 05:39:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:30.341 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:30.342 1+0 records in 00:28:30.342 1+0 records out 00:28:30.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595278 s, 6.9 MB/s 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:30.342 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:30.600 1+0 records in 00:28:30.600 1+0 records out 00:28:30.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703673 s, 5.8 MB/s 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:30.600 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.601 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:30.601 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:30.601 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:30.601 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:30.601 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:30.860 1+0 records in 00:28:30.860 1+0 records out 00:28:30.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000725368 s, 5.6 MB/s 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.860 05:39:50 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:30.860 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:28:31.119 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:28:31.119 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:28:31.119 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:28:31.119 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:28:31.119 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:31.119 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.120 1+0 records in 00:28:31.120 1+0 records out 00:28:31.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480271 s, 8.5 MB/s 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:31.120 05:39:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.379 1+0 records in 00:28:31.379 1+0 records out 00:28:31.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070624 s, 5.8 MB/s 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:28:31.379 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd0", 00:28:31.639 "bdev_name": "Nvme0n1" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd1", 00:28:31.639 "bdev_name": "Nvme1n1" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd2", 00:28:31.639 "bdev_name": "Nvme2n1" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd3", 00:28:31.639 "bdev_name": "Nvme2n2" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd4", 00:28:31.639 "bdev_name": "Nvme2n3" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd5", 00:28:31.639 "bdev_name": "Nvme3n1" 00:28:31.639 } 00:28:31.639 ]' 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd0", 00:28:31.639 "bdev_name": "Nvme0n1" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd1", 00:28:31.639 "bdev_name": "Nvme1n1" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd2", 00:28:31.639 "bdev_name": "Nvme2n1" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd3", 00:28:31.639 "bdev_name": "Nvme2n2" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd4", 00:28:31.639 "bdev_name": "Nvme2n3" 00:28:31.639 }, 00:28:31.639 { 00:28:31.639 "nbd_device": "/dev/nbd5", 00:28:31.639 "bdev_name": "Nvme3n1" 00:28:31.639 } 00:28:31.639 ]' 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.639 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:28:31.640 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:31.640 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:31.640 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.640 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.900 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:32.160 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:32.160 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:32.160 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:32.160 05:39:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.160 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.160 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:32.160 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.160 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.160 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.160 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:28:32.419 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:28:32.419 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.420 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.679 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.939 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.199 05:39:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:33.459 05:39:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:33.459 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:28:33.719 /dev/nbd0 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:33.719 
05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.719 1+0 records in 00:28:33.719 1+0 records out 00:28:33.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416858 s, 9.8 MB/s 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:33.719 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:28:33.990 /dev/nbd1 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.990 1+0 records in 00:28:33.990 1+0 records out 00:28:33.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531236 s, 7.7 MB/s 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # return 0 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:33.990 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:28:34.263 /dev/nbd10 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:34.263 1+0 records in 00:28:34.263 1+0 records out 00:28:34.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788348 s, 5.2 MB/s 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:34.263 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:34.264 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:34.264 05:39:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:34.264 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:34.264 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:34.264 05:39:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:28:34.523 /dev/nbd11 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:34.523 05:39:54 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:34.523 1+0 records in 00:28:34.523 1+0 records out 00:28:34.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647535 s, 6.3 MB/s 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:34.523 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:28:34.783 /dev/nbd12 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:34.783 1+0 records in 00:28:34.783 1+0 records out 00:28:34.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497071 s, 8.2 MB/s 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:34.783 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:28:35.043 /dev/nbd13 
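The start/stop helpers traced above share one readiness idiom: after asking the SPDK app over the RPC socket to attach (nbd_start_disk) or detach (nbd_stop_disk) a device, poll /proc/partitions up to 20 times until the device name appears (start) or disappears (stop), then prove it serves reads with a single 4 KiB O_DIRECT dd plus a stat size check. A minimal sketch of the start-side wait, assuming a scratch path of /tmp/nbdtest (the suite uses test/bdev/nbdtest) and a polling delay that the xtrace output does not show:

    # Poll until the kernel publishes the device, then prove it serves reads.
    waitfornbd_sketch() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed delay between retries; not visible in the trace
      done
      # One 4 KiB direct-I/O read, mirroring the traced dd/stat pair.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
    }

A few entries below, the same devices are exercised end to end: a urandom-filled scratch file is dd'd onto each of them and read back with cmp -b -n 1M, closing the write/verify round trip.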
00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:35.043 1+0 records in 00:28:35.043 1+0 records out 00:28:35.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072916 s, 5.6 MB/s 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:35.043 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:35.303 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:35.303 { 00:28:35.303 "nbd_device": "/dev/nbd0", 00:28:35.303 "bdev_name": "Nvme0n1" 00:28:35.303 }, 00:28:35.303 { 00:28:35.303 "nbd_device": "/dev/nbd1", 00:28:35.304 "bdev_name": "Nvme1n1" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd10", 00:28:35.304 "bdev_name": "Nvme2n1" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd11", 00:28:35.304 "bdev_name": "Nvme2n2" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd12", 00:28:35.304 "bdev_name": "Nvme2n3" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd13", 00:28:35.304 "bdev_name": "Nvme3n1" 00:28:35.304 } 00:28:35.304 ]' 00:28:35.304 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:35.304 05:39:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd0", 00:28:35.304 "bdev_name": "Nvme0n1" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd1", 00:28:35.304 "bdev_name": "Nvme1n1" 00:28:35.304 
}, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd10", 00:28:35.304 "bdev_name": "Nvme2n1" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd11", 00:28:35.304 "bdev_name": "Nvme2n2" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd12", 00:28:35.304 "bdev_name": "Nvme2n3" 00:28:35.304 }, 00:28:35.304 { 00:28:35.304 "nbd_device": "/dev/nbd13", 00:28:35.304 "bdev_name": "Nvme3n1" 00:28:35.304 } 00:28:35.304 ]' 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:35.304 /dev/nbd1 00:28:35.304 /dev/nbd10 00:28:35.304 /dev/nbd11 00:28:35.304 /dev/nbd12 00:28:35.304 /dev/nbd13' 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:35.304 /dev/nbd1 00:28:35.304 /dev/nbd10 00:28:35.304 /dev/nbd11 00:28:35.304 /dev/nbd12 00:28:35.304 /dev/nbd13' 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:35.304 256+0 records in 00:28:35.304 256+0 records out 00:28:35.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128306 s, 81.7 MB/s 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:35.304 256+0 records in 00:28:35.304 256+0 records out 00:28:35.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0973048 s, 10.8 MB/s 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:35.304 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:35.564 256+0 records in 00:28:35.564 256+0 records out 00:28:35.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100695 s, 10.4 MB/s 00:28:35.564 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:35.564 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:28:35.564 256+0 records in 00:28:35.564 256+0 records out 00:28:35.564 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100781 s, 10.4 MB/s 00:28:35.564 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:35.564 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:28:35.823 256+0 records in 00:28:35.823 256+0 records out 00:28:35.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102007 s, 10.3 MB/s 00:28:35.823 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:35.823 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:28:35.823 256+0 records in 00:28:35.823 256+0 records out 00:28:35.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103883 s, 10.1 MB/s 00:28:35.823 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:28:35.824 256+0 records in 00:28:35.824 256+0 records out 00:28:35.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102164 s, 10.3 MB/s 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:35.824 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:36.083 05:39:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:36.083 05:39:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:36.343 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:36.604 
05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:36.604 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:36.864 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.124 05:39:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:37.384 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:28:37.644 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:37.905 malloc_lvol_verify 00:28:37.905 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:38.164 1781120c-2851-4bef-bab3-953555c39597 00:28:38.164 05:39:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:38.423 6e65a8e2-8512-4d07-92d8-5db05058f0a3 00:28:38.423 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:38.683 /dev/nbd0 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:28:38.683 mke2fs 1.47.0 (5-Feb-2023) 00:28:38.683 Discarding device blocks: 0/4096 done 00:28:38.683 Creating filesystem with 4096 1k blocks and 1024 inodes 00:28:38.683 00:28:38.683 Allocating group tables: 0/1 done 00:28:38.683 Writing inode tables: 0/1 done 00:28:38.683 Creating journal (1024 blocks): done 00:28:38.683 Writing superblocks and filesystem accounting information: 0/1 done 00:28:38.683 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
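The nbd_with_lvol_verify step above strings five RPCs together before handing /dev/nbd0 to mkfs.ext4. Condensed from the trace (bdev_malloc_create's total size and bdev_lvol_create's size are in MiB, which matches the 8192 x 512 B = 4 MiB sector count read from sysfs; the retry loop and its delay are an assumption, as the traced wait_for_nbd_set_capacity only shows a single check):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # 16 MiB malloc bdev with 512 B blocks -> lvolstore -> 4 MiB lvol -> nbd.
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0

    # A freshly attached nbd device can momentarily report zero capacity;
    # wait for a non-zero sector count before touching it (8192 above).
    until (( $(cat /sys/block/nbd0/size) > 0 )); do sleep 0.1; done

    mkfs.ext4 /dev/nbd0            # pushes real I/O through the whole stack
    $rpc nbd_stop_disk /dev/nbd0   # detach once the filesystem lands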
00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:38.683 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61630 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61630 ']' 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61630 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61630 00:28:38.942 killing process with pid 61630 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61630' 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61630 00:28:38.942 05:39:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61630 00:28:40.325 05:40:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:28:40.325 00:28:40.325 real 0m11.620s 00:28:40.325 user 0m15.483s 00:28:40.325 sys 0m4.418s 00:28:40.325 05:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:40.325 05:40:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:28:40.325 ************************************ 00:28:40.325 END TEST bdev_nbd 00:28:40.325 ************************************ 00:28:40.325 05:40:00 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:28:40.325 05:40:00 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:28:40.325 skipping fio tests on NVMe due to multi-ns failures. 00:28:40.325 05:40:00 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
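killprocess, traced a few lines up, is careful about what it signals: it checks that the PID is still alive and still the reactor it launched before killing and reaping it, so the stage inherits the real exit status. A simplified sketch (the real helper in common/autotest_common.sh also special-cases processes started through sudo, which the "'[' reactor_0 = sudo ']'" comparison above hints at):

    killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" || return 1           # still running?
      ps --no-headers -o comm= "$pid"      # e.g. reactor_0, as in the trace
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # reap it; works because this shell spawned the process
    }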
00:28:40.325 05:40:00 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:40.325 05:40:00 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:40.325 05:40:00 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:28:40.325 05:40:00 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:40.325 05:40:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:40.325 ************************************ 00:28:40.325 START TEST bdev_verify 00:28:40.325 ************************************ 00:28:40.325 05:40:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:40.325 [2024-11-20 05:40:00.220136] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:40.325 [2024-11-20 05:40:00.220272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62016 ] 00:28:40.585 [2024-11-20 05:40:00.404278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:40.844 [2024-11-20 05:40:00.548911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.844 [2024-11-20 05:40:00.548952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.412 Running I/O for 5 seconds... 00:28:43.736 18176.00 IOPS, 71.00 MiB/s [2024-11-20T05:40:04.605Z] 18624.00 IOPS, 72.75 MiB/s [2024-11-20T05:40:05.543Z] 18453.33 IOPS, 72.08 MiB/s [2024-11-20T05:40:06.481Z] 18672.00 IOPS, 72.94 MiB/s [2024-11-20T05:40:06.481Z] 18636.80 IOPS, 72.80 MiB/s 00:28:46.562 Latency(us) 00:28:46.562 [2024-11-20T05:40:06.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.562 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x0 length 0xbd0bd 00:28:46.562 Nvme0n1 : 5.05 1521.97 5.95 0.00 0.00 83794.78 19231.52 101194.45 00:28:46.562 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:28:46.562 Nvme0n1 : 5.05 1571.99 6.14 0.00 0.00 81125.09 17056.53 77383.99 00:28:46.562 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x0 length 0xa0000 00:28:46.562 Nvme1n1 : 5.05 1521.49 5.94 0.00 0.00 83664.37 22551.25 99362.88 00:28:46.562 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0xa0000 length 0xa0000 00:28:46.562 Nvme1n1 : 5.05 1571.45 6.14 0.00 0.00 80996.12 20261.79 70057.70 00:28:46.562 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x0 length 0x80000 00:28:46.562 Nvme2n1 : 5.07 1526.96 5.96 0.00 0.00 83209.22 9901.95 97989.20 00:28:46.562 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x80000 length 0x80000 00:28:46.562 Nvme2n1 : 5.07 1576.86 6.16 0.00 0.00 80477.18 8699.98 62731.40 00:28:46.562 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x0 length 0x80000 00:28:46.562 Nvme2n2 : 5.07 1526.42 5.96 0.00 0.00 83085.43 8242.08 93410.26 00:28:46.562 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x80000 length 0x80000 00:28:46.562 Nvme2n2 : 5.08 1576.06 6.16 0.00 0.00 80343.29 8127.61 63647.19 00:28:46.562 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x0 length 0x80000 00:28:46.562 Nvme2n3 : 5.09 1534.03 5.99 0.00 0.00 82676.57 11103.92 92494.48 00:28:46.562 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x80000 length 0x80000 00:28:46.562 Nvme2n3 : 5.09 1583.35 6.18 0.00 0.00 79981.54 14137.46 65478.76 00:28:46.562 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x0 length 0x20000 00:28:46.562 Nvme3n1 : 5.09 1533.26 5.99 0.00 0.00 82536.11 11619.05 95241.84 00:28:46.562 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:46.562 Verification LBA range: start 0x20000 length 0x20000 00:28:46.562 Nvme3n1 : 5.10 1582.41 6.18 0.00 0.00 79873.82 12992.73 67768.23 00:28:46.562 [2024-11-20T05:40:06.481Z] =================================================================================================================== 00:28:46.562 [2024-11-20T05:40:06.481Z] Total : 18626.25 72.76 0.00 0.00 81788.66 8127.61 101194.45 00:28:48.463 00:28:48.463 real 0m7.937s 00:28:48.463 user 0m14.580s 00:28:48.463 sys 0m0.400s 00:28:48.463 05:40:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:48.463 05:40:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:28:48.463 ************************************ 00:28:48.463 END TEST bdev_verify 00:28:48.463 ************************************ 00:28:48.463 05:40:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:48.463 05:40:08 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:28:48.463 05:40:08 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:48.463 05:40:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:48.463 ************************************ 00:28:48.463 START TEST bdev_verify_big_io 00:28:48.463 ************************************ 00:28:48.463 05:40:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:48.463 [2024-11-20 05:40:08.218936] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
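Each verify stage is a single bdevperf invocation; the 4 KiB pass that just finished and the 64 KiB big-I/O pass starting here differ only in the -o size. The command, reassembled from the trace:

    # 128-deep verify workload for 5 s on cores 0-1 (-m 0x3). -C lets every
    # core submit to every bdev, which is why each Nvme*n1 job appears twice
    # in the table (Core Mask 0x1 and 0x2). Swap -o 4096 for -o 65536 to get
    # the big-I/O variant below.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''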
00:28:48.463 [2024-11-20 05:40:08.219107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62125 ] 00:28:48.721 [2024-11-20 05:40:08.410831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:48.721 [2024-11-20 05:40:08.557066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.721 [2024-11-20 05:40:08.557107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.681 Running I/O for 5 seconds... 00:28:52.442 752.00 IOPS, 47.00 MiB/s [2024-11-20T05:40:14.296Z] 1579.50 IOPS, 98.72 MiB/s [2024-11-20T05:40:15.236Z] 2110.00 IOPS, 131.88 MiB/s [2024-11-20T05:40:15.236Z] 2925.00 IOPS, 182.81 MiB/s 00:28:55.317 Latency(us) 00:28:55.317 [2024-11-20T05:40:15.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.317 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x0 length 0xbd0b 00:28:55.317 Nvme0n1 : 5.42 165.31 10.33 0.00 0.00 753333.48 40065.68 783913.59 00:28:55.317 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0xbd0b length 0xbd0b 00:28:55.317 Nvme0n1 : 5.48 163.41 10.21 0.00 0.00 761595.18 26672.29 805892.47 00:28:55.317 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x0 length 0xa000 00:28:55.317 Nvme1n1 : 5.53 164.93 10.31 0.00 0.00 729253.31 99362.88 685008.60 00:28:55.317 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0xa000 length 0xa000 00:28:55.317 Nvme1n1 : 5.60 163.38 10.21 0.00 0.00 736673.24 71431.38 670356.01 00:28:55.317 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x0 length 0x8000 00:28:55.317 Nvme2n1 : 5.60 171.32 10.71 0.00 0.00 694021.95 70057.70 663029.72 00:28:55.317 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x8000 length 0x8000 00:28:55.317 Nvme2n1 : 5.65 169.98 10.62 0.00 0.00 701178.92 40523.57 681345.45 00:28:55.317 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x0 length 0x8000 00:28:55.317 Nvme2n2 : 5.66 177.69 11.11 0.00 0.00 657868.16 35257.80 674019.16 00:28:55.317 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x8000 length 0x8000 00:28:55.317 Nvme2n2 : 5.69 176.47 11.03 0.00 0.00 660992.11 15568.38 699661.19 00:28:55.317 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:55.317 Verification LBA range: start 0x0 length 0x8000 00:28:55.317 Nvme2n3 : 5.66 180.97 11.31 0.00 0.00 631781.87 13794.04 688671.75 00:28:55.318 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:55.318 Verification LBA range: start 0x8000 length 0x8000 00:28:55.318 Nvme2n3 : 5.69 179.89 11.24 0.00 0.00 633541.65 24153.88 717976.93 00:28:55.318 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:55.318 Verification LBA range: start 0x0 length 0x2000 00:28:55.318 Nvme3n1 : 5.69 194.06 12.13 0.00 0.00 575123.45 6725.31 824208.21 00:28:55.318 
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:55.318 Verification LBA range: start 0x2000 length 0x2000 00:28:55.318 Nvme3n1 : 5.71 190.59 11.91 0.00 0.00 583966.03 3577.29 739955.81 00:28:55.318 [2024-11-20T05:40:15.237Z] =================================================================================================================== 00:28:55.318 [2024-11-20T05:40:15.237Z] Total : 2098.01 131.13 0.00 0.00 672436.39 3577.29 824208.21 00:28:57.857 00:28:57.857 real 0m9.230s 00:28:57.857 user 0m17.131s 00:28:57.857 sys 0m0.426s 00:28:57.857 05:40:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:57.857 05:40:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.857 ************************************ 00:28:57.857 END TEST bdev_verify_big_io 00:28:57.857 ************************************ 00:28:57.857 05:40:17 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:57.857 05:40:17 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:28:57.857 05:40:17 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:57.857 05:40:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:28:57.857 ************************************ 00:28:57.857 START TEST bdev_write_zeroes 00:28:57.857 ************************************ 00:28:57.857 05:40:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:57.857 [2024-11-20 05:40:17.512540] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:28:57.857 [2024-11-20 05:40:17.512680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62240 ] 00:28:57.857 [2024-11-20 05:40:17.690868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.117 [2024-11-20 05:40:17.827704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.686 Running I/O for 1 seconds... 
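The MiB/s column in these tables is just IOPS times the I/O size: the big-I/O total of 2098.01 IOPS at 64 KiB works out to the reported 131.13 MiB/s. A quick check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 2098.01 * 65536 / 1048576 }'
    # prints 131.13 MiB/s; the same formula gives 72.80 MiB/s for the
    # earlier 4 KiB verify run (18636.80 * 4096 / 1048576).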
00:29:00.061 55680.00 IOPS, 217.50 MiB/s 00:29:00.061 Latency(us) 00:29:00.061 [2024-11-20T05:40:19.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.061 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:00.061 Nvme0n1 : 1.02 9260.07 36.17 0.00 0.00 13782.11 10245.37 34113.06 00:29:00.061 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:00.061 Nvme1n1 : 1.02 9248.83 36.13 0.00 0.00 13778.56 10417.08 34799.90 00:29:00.061 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:00.061 Nvme2n1 : 1.03 9239.09 36.09 0.00 0.00 13679.45 10188.13 29076.23 00:29:00.061 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:00.061 Nvme2n2 : 1.03 9284.06 36.27 0.00 0.00 13596.23 6324.65 27473.61 00:29:00.061 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:00.061 Nvme2n3 : 1.03 9273.66 36.23 0.00 0.00 13558.46 6639.46 25413.09 00:29:00.061 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:00.061 Nvme3n1 : 1.03 9262.62 36.18 0.00 0.00 13527.14 6982.88 22665.73 00:29:00.061 [2024-11-20T05:40:19.980Z] =================================================================================================================== 00:29:00.061 [2024-11-20T05:40:19.980Z] Total : 55568.34 217.06 0.00 0.00 13653.35 6324.65 34799.90 00:29:01.475 00:29:01.475 real 0m3.620s 00:29:01.475 user 0m3.170s 00:29:01.475 sys 0m0.330s 00:29:01.475 05:40:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:01.475 05:40:21 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:29:01.475 ************************************ 00:29:01.475 END TEST bdev_write_zeroes 00:29:01.475 ************************************ 00:29:01.475 05:40:21 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:01.475 05:40:21 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:29:01.475 05:40:21 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:01.475 05:40:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.475 ************************************ 00:29:01.475 START TEST bdev_json_nonenclosed 00:29:01.475 ************************************ 00:29:01.475 05:40:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:01.475 [2024-11-20 05:40:21.196550] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
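The two bdev_json_* stages that follow are negative tests: each hands bdevperf a --json config that breaks one structural rule, and the exact error strings appear in the output below. The fixture files themselves are never printed in this log, so the following contents are illustrative guesses that would trigger those two errors:

    # Valid shape for reference: one enclosing object with a "subsystems"
    # array (the shape test/bdev/bdev.json provides).
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > good.json

    # nonenclosed: body not wrapped in {} ->
    #   "Invalid JSON configuration: not enclosed in {}."
    printf '%s\n' '"subsystems": [ { "subsystem": "bdev", "config": [] } ]' > nonenclosed.json

    # nonarray: "subsystems" is an object, not an array ->
    #   "Invalid JSON configuration: 'subsystems' should be an array."
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev", "config": [] } }' > nonarray.json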
00:29:01.475 [2024-11-20 05:40:21.196695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62304 ] 00:29:01.475 [2024-11-20 05:40:21.379019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.734 [2024-11-20 05:40:21.521141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.734 [2024-11-20 05:40:21.521264] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:01.734 [2024-11-20 05:40:21.521285] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:01.734 [2024-11-20 05:40:21.521295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:01.994 00:29:01.994 real 0m0.701s 00:29:01.994 user 0m0.440s 00:29:01.994 sys 0m0.157s 00:29:01.994 05:40:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:01.994 05:40:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:01.994 ************************************ 00:29:01.994 END TEST bdev_json_nonenclosed 00:29:01.994 ************************************ 00:29:01.994 05:40:21 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:01.994 05:40:21 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:29:01.994 05:40:21 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:01.994 05:40:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:29:01.994 ************************************ 00:29:01.994 START TEST bdev_json_nonarray 00:29:01.994 ************************************ 00:29:01.994 05:40:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:02.253 [2024-11-20 05:40:21.955652] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:02.253 [2024-11-20 05:40:21.955790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62335 ] 00:29:02.253 [2024-11-20 05:40:22.134668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.512 [2024-11-20 05:40:22.275509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.512 [2024-11-20 05:40:22.275641] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
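A config rejection is the pass condition in these stages: bdevperf logs the json_config error, spdk_app_stop reports a non-zero rc, and the stage still ends with END TEST. The gist of such a check, sketched under the assumption of an explicit exit-status inversion (the suite's actual pass/fail plumbing runs through the run_test/xtrace machinery and is not visible in this log):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    if ! "$bdevperf" --json nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
      echo "malformed config correctly rejected"
    else
      echo "config was accepted but should not have been" >&2
      exit 1
    fi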
00:29:02.512 [2024-11-20 05:40:22.275661] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:29:02.512 [2024-11-20 05:40:22.275672] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:02.772
00:29:02.772 real 0m0.689s
00:29:02.772 user 0m0.442s
00:29:02.772 sys 0m0.142s
00:29:02.772 05:40:22 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:02.772 05:40:22 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:29:02.772 ************************************
00:29:02.772 END TEST bdev_json_nonarray
00:29:02.772 ************************************
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:29:02.772 05:40:22 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:29:02.772
00:29:02.772 real 0m44.854s
00:29:02.772 user 1m6.179s
00:29:02.772 sys 0m8.033s
00:29:02.772 05:40:22 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:02.772 05:40:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:29:02.772 ************************************
00:29:02.772 END TEST blockdev_nvme
00:29:02.772 ************************************
00:29:02.772 05:40:22 -- spdk/autotest.sh@209 -- # uname -s
00:29:02.772 05:40:22 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:29:02.772 05:40:22 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:29:02.772 05:40:22 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:29:02.772 05:40:22 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:02.772 05:40:22 -- common/autotest_common.sh@10 -- # set +x
00:29:02.772 ************************************
00:29:02.772 START TEST blockdev_nvme_gpt
00:29:02.772 ************************************
00:29:02.772 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:29:03.032 * Looking for test storage...
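[editor's note] The trace that follows steps through scripts/common.sh's cmp_versions, deciding whether the installed lcov (1.15) sorts before 2 so the legacy coverage flags can be kept. A standalone re-implementation of the same dotted-version comparison (a sketch under the assumption of numeric components; not the scripts/common.sh source itself):

    lt() {    # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2, keep legacy LCOV_OPTS"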
00:29:03.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:03.032 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:03.032 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:29:03.032 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:03.032 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.032 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:29:03.033 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.033 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.033 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.033 05:40:22 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:03.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.033 --rc genhtml_branch_coverage=1 00:29:03.033 --rc genhtml_function_coverage=1 00:29:03.033 --rc genhtml_legend=1 00:29:03.033 --rc geninfo_all_blocks=1 00:29:03.033 --rc geninfo_unexecuted_blocks=1 00:29:03.033 00:29:03.033 ' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:03.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.033 --rc 
genhtml_branch_coverage=1 00:29:03.033 --rc genhtml_function_coverage=1 00:29:03.033 --rc genhtml_legend=1 00:29:03.033 --rc geninfo_all_blocks=1 00:29:03.033 --rc geninfo_unexecuted_blocks=1 00:29:03.033 00:29:03.033 ' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:03.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.033 --rc genhtml_branch_coverage=1 00:29:03.033 --rc genhtml_function_coverage=1 00:29:03.033 --rc genhtml_legend=1 00:29:03.033 --rc geninfo_all_blocks=1 00:29:03.033 --rc geninfo_unexecuted_blocks=1 00:29:03.033 00:29:03.033 ' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:03.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.033 --rc genhtml_branch_coverage=1 00:29:03.033 --rc genhtml_function_coverage=1 00:29:03.033 --rc genhtml_legend=1 00:29:03.033 --rc geninfo_all_blocks=1 00:29:03.033 --rc geninfo_unexecuted_blocks=1 00:29:03.033 00:29:03.033 ' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62419 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:29:03.033 05:40:22 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62419 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 62419 ']' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:03.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:03.033 05:40:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:03.291 [2024-11-20 05:40:23.040254] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:03.291 [2024-11-20 05:40:23.040377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62419 ] 00:29:03.550 [2024-11-20 05:40:23.220150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.550 [2024-11-20 05:40:23.360938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.487 05:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:04.487 05:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:29:04.487 05:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:29:04.487 05:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:29:04.487 05:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:05.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:05.314 Waiting for block devices as requested 00:29:05.314 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:05.573 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:05.573 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:05.573 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:10.861 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:10.861 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:10.861 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:29:10.861 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:10.862 05:40:30 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:10.862 BYT; 00:29:10.862 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:10.862 BYT; 00:29:10.862 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:10.862 05:40:30 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:10.862 05:40:30 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:11.799 The operation has completed successfully. 00:29:11.799 05:40:31 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:13.176 The operation has completed successfully. 00:29:13.177 05:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:13.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:14.370 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.370 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.370 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.370 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:29:14.370 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:29:14.370 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.370 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.370 [] 00:29:14.370 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.370 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:29:14.370 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:29:14.370 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:29:14.370 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:14.370 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:29:14.370 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.370 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:29:14.937 05:40:34 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:14.937 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:29:14.937 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:29:14.938 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "caebfc7e-f3c9-41b6-a1e6-558fa5cfe2dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "caebfc7e-f3c9-41b6-a1e6-558fa5cfe2dd",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "27d27d6a-28e6-4279-a060-e927e0da35a0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27d27d6a-28e6-4279-a060-e927e0da35a0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ce8b926f-8aea-4855-8d9c-1cbf503536fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ce8b926f-8aea-4855-8d9c-1cbf503536fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f8ab5e3e-a8f6-4018-8f1d-20248f9edc38"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f8ab5e3e-a8f6-4018-8f1d-20248f9edc38",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2c771003-2ce6-4cf6-99bb-ceeab3c97fd0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c771003-2ce6-4cf6-99bb-ceeab3c97fd0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:14.938 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:29:14.938 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:29:14.938 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:29:14.938 05:40:34 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62419 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 62419 ']' 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 62419 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62419 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:14.938 killing process with pid 62419 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62419' 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 62419 00:29:14.938 05:40:34 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 62419 00:29:18.228 05:40:37 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:18.228 05:40:37 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:18.229 05:40:37 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:29:18.229 05:40:37 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:18.229 05:40:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:18.229 ************************************ 00:29:18.229 START TEST bdev_hello_world 00:29:18.229 ************************************ 00:29:18.229 05:40:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:18.229 
[2024-11-20 05:40:37.555841] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:18.229 [2024-11-20 05:40:37.555987] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63061 ] 00:29:18.229 [2024-11-20 05:40:37.737840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.229 [2024-11-20 05:40:37.880767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.794 [2024-11-20 05:40:38.607652] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:18.794 [2024-11-20 05:40:38.607731] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:18.794 [2024-11-20 05:40:38.607763] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:18.794 [2024-11-20 05:40:38.610862] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:18.794 [2024-11-20 05:40:38.611410] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:18.794 [2024-11-20 05:40:38.611448] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:18.794 [2024-11-20 05:40:38.611679] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:18.794 00:29:18.794 [2024-11-20 05:40:38.611710] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:20.244 00:29:20.244 real 0m2.395s 00:29:20.244 user 0m1.950s 00:29:20.244 sys 0m0.337s 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:29:20.244 ************************************ 00:29:20.244 END TEST bdev_hello_world 00:29:20.244 ************************************ 00:29:20.244 05:40:39 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:29:20.244 05:40:39 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:20.244 05:40:39 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:20.244 05:40:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:20.244 ************************************ 00:29:20.244 START TEST bdev_bounds 00:29:20.244 ************************************ 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63103 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:20.244 Process bdevio pid: 63103 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63103' 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63103 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 63103 ']' 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.244 05:40:39 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:20.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:20.244 05:40:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:20.244 [2024-11-20 05:40:40.026122] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:20.244 [2024-11-20 05:40:40.026274] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63103 ] 00:29:20.504 [2024-11-20 05:40:40.209714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:20.504 [2024-11-20 05:40:40.354426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.504 [2024-11-20 05:40:40.354571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.504 [2024-11-20 05:40:40.354619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.441 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:21.441 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:29:21.441 05:40:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:21.441 I/O targets: 00:29:21.441 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:29:21.441 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:21.441 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:21.441 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:21.441 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:21.441 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:29:21.441 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:29:21.441 00:29:21.441 00:29:21.441 CUnit - A unit testing framework for C - Version 2.1-3 00:29:21.441 http://cunit.sourceforge.net/ 00:29:21.441 00:29:21.441 00:29:21.441 Suite: bdevio tests on: Nvme3n1 00:29:21.441 Test: blockdev write read block ...passed 00:29:21.441 Test: blockdev write zeroes read block ...passed 00:29:21.441 Test: blockdev write zeroes read no split ...passed 00:29:21.441 Test: blockdev write zeroes read split ...passed 00:29:21.441 Test: blockdev write zeroes read split partial ...passed 00:29:21.441 Test: blockdev reset ...[2024-11-20 05:40:41.299171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:29:21.441 passed 00:29:21.441 Test: blockdev write read 8 blocks ...[2024-11-20 05:40:41.303872] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
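[editor's note] On the I/O targets list above: the MiB figure is just blocks * block_size expressed in MiB (rounded where inexact; Nvme0n1's 6050 MiB is 6049.8 rounded up). For Nvme1n1p1 the arithmetic is exact:

    echo "$(( 655104 * 4096 / 1048576 )) MiB"   # prints "2559 MiB", matching "655104 blocks of 4096 bytes (2559 MiB)"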
00:29:21.441 passed 00:29:21.441 Test: blockdev write read size > 128k ...passed 00:29:21.441 Test: blockdev write read invalid size ...passed 00:29:21.441 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.441 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.441 Test: blockdev write read max offset ...passed 00:29:21.441 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.441 Test: blockdev writev readv 8 blocks ...passed 00:29:21.441 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.441 Test: blockdev writev readv block ...passed 00:29:21.441 Test: blockdev writev readv size > 128k ...passed 00:29:21.441 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.441 Test: blockdev comparev and writev ...[2024-11-20 05:40:41.311961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0a04000 len:0x1000 00:29:21.441 [2024-11-20 05:40:41.312146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:21.441 passed 00:29:21.441 Test: blockdev nvme passthru rw ...passed 00:29:21.441 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:40:41.312952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:21.441 [2024-11-20 05:40:41.313077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:21.441 passed 00:29:21.441 Test: blockdev nvme admin passthru ...passed 00:29:21.441 Test: blockdev copy ...passed 00:29:21.441 Suite: bdevio tests on: Nvme2n3 00:29:21.441 Test: blockdev write read block ...passed 00:29:21.441 Test: blockdev write zeroes read block ...passed 00:29:21.441 Test: blockdev write zeroes read no split ...passed 00:29:21.701 Test: blockdev write zeroes read split ...passed 00:29:21.701 Test: blockdev write zeroes read split partial ...passed 00:29:21.701 Test: blockdev reset ...[2024-11-20 05:40:41.400156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:21.701 [2024-11-20 05:40:41.405073] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
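[editor's note] The "(SCT/SC)" pairs that spdk_nvme_print_completion prints in these suites are the NVMe status fields: (00/01) is Generic Command Status / Invalid Command Opcode, seen on the passthru probes, and (02/85) is Media and Data Integrity Errors / Compare Failure, seen on the comparev tests (which still pass, so the miscompare appears to be exercised on purpose). A toy decoder covering just the pairs in this run (helper name is ours):

    decode_nvme_status() {
        case "$1" in
            00/01) echo "Generic Command Status / Invalid Command Opcode" ;;
            02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
            *)     echo "not decoded in this sketch" ;;
        esac
    }
    decode_nvme_status 02/85    # -> Media and Data Integrity Errors / Compare Failure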
00:29:21.701 passed 00:29:21.701 Test: blockdev write read 8 blocks ...passed 00:29:21.701 Test: blockdev write read size > 128k ...passed 00:29:21.701 Test: blockdev write read invalid size ...passed 00:29:21.701 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.701 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.701 Test: blockdev write read max offset ...passed 00:29:21.701 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.701 Test: blockdev writev readv 8 blocks ...passed 00:29:21.701 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.701 Test: blockdev writev readv block ...passed 00:29:21.701 Test: blockdev writev readv size > 128k ...passed 00:29:21.701 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.701 Test: blockdev comparev and writev ...[2024-11-20 05:40:41.413203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b0a02000 len:0x1000 00:29:21.701 [2024-11-20 05:40:41.413373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:21.701 passed 00:29:21.701 Test: blockdev nvme passthru rw ...passed 00:29:21.701 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:40:41.414106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:21.701 [2024-11-20 05:40:41.414234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:21.701 passed 00:29:21.701 Test: blockdev nvme admin passthru ...passed 00:29:21.701 Test: blockdev copy ...passed 00:29:21.701 Suite: bdevio tests on: Nvme2n2 00:29:21.701 Test: blockdev write read block ...passed 00:29:21.701 Test: blockdev write zeroes read block ...passed 00:29:21.701 Test: blockdev write zeroes read no split ...passed 00:29:21.701 Test: blockdev write zeroes read split ...passed 00:29:21.701 Test: blockdev write zeroes read split partial ...passed 00:29:21.701 Test: blockdev reset ...[2024-11-20 05:40:41.498801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:21.701 [2024-11-20 05:40:41.503425] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:21.701 passed 00:29:21.701 Test: blockdev write read 8 blocks ...passed 00:29:21.701 Test: blockdev write read size > 128k ...passed 00:29:21.701 Test: blockdev write read invalid size ...passed 00:29:21.701 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.701 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.701 Test: blockdev write read max offset ...passed 00:29:21.701 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.701 Test: blockdev writev readv 8 blocks ...passed 00:29:21.701 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.701 Test: blockdev writev readv block ...passed 00:29:21.701 Test: blockdev writev readv size > 128k ...passed 00:29:21.701 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.701 Test: blockdev comparev and writev ...[2024-11-20 05:40:41.511569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4838000 len:0x1000 00:29:21.701 [2024-11-20 05:40:41.511715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:21.701 passed 00:29:21.701 Test: blockdev nvme passthru rw ...passed 00:29:21.701 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:40:41.512489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:21.701 [2024-11-20 05:40:41.512590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:21.701 passed 00:29:21.701 Test: blockdev nvme admin passthru ...passed 00:29:21.701 Test: blockdev copy ...passed 00:29:21.701 Suite: bdevio tests on: Nvme2n1 00:29:21.701 Test: blockdev write read block ...passed 00:29:21.701 Test: blockdev write zeroes read block ...passed 00:29:21.701 Test: blockdev write zeroes read no split ...passed 00:29:21.701 Test: blockdev write zeroes read split ...passed 00:29:21.701 Test: blockdev write zeroes read split partial ...passed 00:29:21.701 Test: blockdev reset ...[2024-11-20 05:40:41.595204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:29:21.701 [2024-11-20 05:40:41.599810] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:29:21.701 passed 00:29:21.701 Test: blockdev write read 8 blocks ...passed 00:29:21.701 Test: blockdev write read size > 128k ...passed 00:29:21.701 Test: blockdev write read invalid size ...passed 00:29:21.701 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.701 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.701 Test: blockdev write read max offset ...passed 00:29:21.701 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.701 Test: blockdev writev readv 8 blocks ...passed 00:29:21.701 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.701 Test: blockdev writev readv block ...passed 00:29:21.701 Test: blockdev writev readv size > 128k ...passed 00:29:21.701 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.702 Test: blockdev comparev and writev ...[2024-11-20 05:40:41.609024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4834000 len:0x1000 00:29:21.702 [2024-11-20 05:40:41.609196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:21.702 passed 00:29:21.702 Test: blockdev nvme passthru rw ...passed 00:29:21.702 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:40:41.610006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:21.702 [2024-11-20 05:40:41.610116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:21.702 passed 00:29:21.702 Test: blockdev nvme admin passthru ...passed 00:29:21.702 Test: blockdev copy ...passed 00:29:21.702 Suite: bdevio tests on: Nvme1n1p2 00:29:21.702 Test: blockdev write read block ...passed 00:29:21.702 Test: blockdev write zeroes read block ...passed 00:29:21.961 Test: blockdev write zeroes read no split ...passed 00:29:21.961 Test: blockdev write zeroes read split ...passed 00:29:21.961 Test: blockdev write zeroes read split partial ...passed 00:29:21.961 Test: blockdev reset ...[2024-11-20 05:40:41.697900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:21.961 [2024-11-20 05:40:41.702092] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:29:21.961 passed 00:29:21.961 Test: blockdev write read 8 blocks ...passed 00:29:21.961 Test: blockdev write read size > 128k ...passed 00:29:21.961 Test: blockdev write read invalid size ...passed 00:29:21.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.961 Test: blockdev write read max offset ...passed 00:29:21.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.961 Test: blockdev writev readv 8 blocks ...passed 00:29:21.961 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.961 Test: blockdev writev readv block ...passed 00:29:21.961 Test: blockdev writev readv size > 128k ...passed 00:29:21.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.961 Test: blockdev comparev and writev ...[2024-11-20 05:40:41.711294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c4830000 len:0x1000 00:29:21.961 [2024-11-20 05:40:41.711453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:21.961 passed 00:29:21.961 Test: blockdev nvme passthru rw ...passed 00:29:21.961 Test: blockdev nvme passthru vendor specific ...passed 00:29:21.961 Test: blockdev nvme admin passthru ...passed 00:29:21.961 Test: blockdev copy ...passed 00:29:21.961 Suite: bdevio tests on: Nvme1n1p1 00:29:21.961 Test: blockdev write read block ...passed 00:29:21.961 Test: blockdev write zeroes read block ...passed 00:29:21.961 Test: blockdev write zeroes read no split ...passed 00:29:21.961 Test: blockdev write zeroes read split ...passed 00:29:21.961 Test: blockdev write zeroes read split partial ...passed 00:29:21.961 Test: blockdev reset ...[2024-11-20 05:40:41.788623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:29:21.961 [2024-11-20 05:40:41.793261] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
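[editor's note] A hedged observation on the two partition suites (Nvme1n1p2 above, Nvme1n1p1 just below): the comparev LBAs include the GPT partition offsets from the bdev dump earlier in this log — p2 (offset_blocks 655360) is exercised at lba:655360 and p1 (offset_blocks 256) at lba:256, i.e. partition-bdev I/O lands on the base namespace at partition start + I/O offset. The two partitions are also back-to-back:

    echo $(( 256 + 655104 ))    # 655360: p1 start + p1 blocks = p2 start, per the GPT layout in this log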
00:29:21.961 passed 00:29:21.961 Test: blockdev write read 8 blocks ...passed 00:29:21.961 Test: blockdev write read size > 128k ...passed 00:29:21.961 Test: blockdev write read invalid size ...passed 00:29:21.961 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:21.961 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:21.961 Test: blockdev write read max offset ...passed 00:29:21.961 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:21.961 Test: blockdev writev readv 8 blocks ...passed 00:29:21.961 Test: blockdev writev readv 30 x 1block ...passed 00:29:21.961 Test: blockdev writev readv block ...passed 00:29:21.961 Test: blockdev writev readv size > 128k ...passed 00:29:21.961 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:21.962 Test: blockdev comparev and writev ...[2024-11-20 05:40:41.802596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b0c0e000 len:0x1000 00:29:21.962 [2024-11-20 05:40:41.802774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:21.962 passed 00:29:21.962 Test: blockdev nvme passthru rw ...passed 00:29:21.962 Test: blockdev nvme passthru vendor specific ...passed 00:29:21.962 Test: blockdev nvme admin passthru ...passed 00:29:21.962 Test: blockdev copy ...passed 00:29:21.962 Suite: bdevio tests on: Nvme0n1 00:29:21.962 Test: blockdev write read block ...passed 00:29:21.962 Test: blockdev write zeroes read block ...passed 00:29:21.962 Test: blockdev write zeroes read no split ...passed 00:29:21.962 Test: blockdev write zeroes read split ...passed 00:29:22.221 Test: blockdev write zeroes read split partial ...passed 00:29:22.221 Test: blockdev reset ...[2024-11-20 05:40:41.880751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:29:22.221 [2024-11-20 05:40:41.885132] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:29:22.221 passed 00:29:22.221 Test: blockdev write read 8 blocks ...passed 00:29:22.221 Test: blockdev write read size > 128k ...passed 00:29:22.221 Test: blockdev write read invalid size ...passed 00:29:22.221 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:22.221 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:22.221 Test: blockdev write read max offset ...passed 00:29:22.221 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:22.221 Test: blockdev writev readv 8 blocks ...passed 00:29:22.221 Test: blockdev writev readv 30 x 1block ...passed 00:29:22.221 Test: blockdev writev readv block ...passed 00:29:22.221 Test: blockdev writev readv size > 128k ...passed 00:29:22.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:22.221 Test: blockdev comparev and writev ...passed 00:29:22.221 Test: blockdev nvme passthru rw ...[2024-11-20 05:40:41.892691] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:29:22.221 separate metadata which is not supported yet. 
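(The skip above is deliberate: this Nvme0n1 namespace is formatted with a separate, non-interleaved metadata buffer, which bdevio's compare-and-write path does not support yet, so the case is skipped rather than failed. A bdev's metadata layout can be inspected over the same RPC interface; the jq field names below match recent SPDK bdev_get_bdevs output but should be verified against your build.)

# md_size > 0 together with "md_interleave": false is the separate-metadata
# layout that triggers the skip message above.
scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'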
00:29:22.221 passed 00:29:22.221 Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:40:41.893383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:29:22.221 [2024-11-20 05:40:41.893432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:29:22.221 passed 00:29:22.221 Test: blockdev nvme admin passthru ...passed 00:29:22.221 Test: blockdev copy ...passed 00:29:22.221 00:29:22.221 Run Summary: Type Total Ran Passed Failed Inactive 00:29:22.221 suites 7 7 n/a 0 0 00:29:22.221 tests 161 161 161 0 0 00:29:22.221 asserts 1025 1025 1025 0 n/a 00:29:22.221 00:29:22.221 Elapsed time = 1.846 seconds 00:29:22.221 0 00:29:22.221 05:40:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63103 00:29:22.221 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 63103 ']' 00:29:22.221 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 63103 00:29:22.221 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:29:22.221 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:22.221 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63103 00:29:22.222 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:22.222 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:22.222 killing process with pid 63103 00:29:22.222 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63103' 00:29:22.222 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 63103 00:29:22.222 05:40:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 63103 00:29:23.601 05:40:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:29:23.602 00:29:23.602 real 0m3.193s 00:29:23.602 user 0m8.135s 00:29:23.602 sys 0m0.516s 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:29:23.602 ************************************ 00:29:23.602 END TEST bdev_bounds 00:29:23.602 ************************************ 00:29:23.602 05:40:43 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:23.602 05:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:29:23.602 05:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:23.602 05:40:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:23.602 ************************************ 00:29:23.602 START TEST bdev_nbd 00:29:23.602 ************************************ 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63168 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63168 /var/tmp/spdk-nbd.sock 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 63168 ']' 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:23.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:23.602 05:40:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:23.602 [2024-11-20 05:40:43.297219] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
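(From here on, nbd_function_test exports every bdev through the kernel's /dev/nbd* devices: bdev_svc is started with its RPC server on /var/tmp/spdk-nbd.sock, each bdev is attached to an nbd node and probed with a single O_DIRECT read, the mapping is listed and torn down, and the same set is then re-attached for dd-based data verification. Stripped of the xtrace noise, one cycle amounts to the sketch below; paths match this workspace, and the readiness poll is a simplification of the waitforlisten helper, using framework_get_reactors merely as a cheap RPC ping.)

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" --json "$SPDK"/test/bdev/bdev.json &
until "$SPDK"/scripts/rpc.py -s "$SOCK" framework_get_reactors &> /dev/null; do
    sleep 0.2                                     # wait for the RPC socket to answer
done
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_get_disks   # JSON map: nbd_device -> bdev_name
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # proves the export is live
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0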
00:29:23.602 [2024-11-20 05:40:43.297383] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.602 [2024-11-20 05:40:43.459834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.862 [2024-11-20 05:40:43.605618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:24.801 1+0 records in 00:29:24.801 1+0 records out 00:29:24.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736208 s, 5.6 MB/s 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:24.801 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:25.060 1+0 records in 00:29:25.060 1+0 records out 00:29:25.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641043 s, 6.4 MB/s 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:25.060 05:40:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:25.318 1+0 records in 00:29:25.318 1+0 records out 00:29:25.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679714 s, 6.0 MB/s 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:25.318 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:25.319 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:25.319 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:29:25.576 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:29:25.576 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:29:25.576 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:29:25.576 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:29:25.576 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:25.577 1+0 records in 00:29:25.577 1+0 records out 00:29:25.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613075 s, 6.7 MB/s 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:25.577 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:25.836 1+0 records in 00:29:25.836 1+0 records out 00:29:25.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590807 s, 6.9 MB/s 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:25.836 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:26.096 1+0 records in 00:29:26.096 1+0 records out 00:29:26.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710057 s, 5.8 MB/s 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:26.096 05:40:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:26.357 1+0 records in 00:29:26.357 1+0 records out 00:29:26.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768087 s, 5.3 MB/s 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:29:26.357 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd0", 00:29:26.617 "bdev_name": "Nvme0n1" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd1", 00:29:26.617 "bdev_name": "Nvme1n1p1" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd2", 00:29:26.617 "bdev_name": "Nvme1n1p2" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd3", 00:29:26.617 "bdev_name": "Nvme2n1" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd4", 00:29:26.617 "bdev_name": "Nvme2n2" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd5", 00:29:26.617 "bdev_name": "Nvme2n3" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd6", 00:29:26.617 "bdev_name": "Nvme3n1" 00:29:26.617 } 00:29:26.617 ]' 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd0", 00:29:26.617 "bdev_name": "Nvme0n1" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd1", 00:29:26.617 "bdev_name": "Nvme1n1p1" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd2", 00:29:26.617 "bdev_name": "Nvme1n1p2" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd3", 00:29:26.617 "bdev_name": "Nvme2n1" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd4", 00:29:26.617 "bdev_name": "Nvme2n2" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd5", 00:29:26.617 "bdev_name": "Nvme2n3" 00:29:26.617 }, 00:29:26.617 { 00:29:26.617 "nbd_device": "/dev/nbd6", 00:29:26.617 "bdev_name": "Nvme3n1" 00:29:26.617 } 00:29:26.617 ]' 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:26.617 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:26.876 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:27.136 05:40:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:27.395 05:40:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:27.395 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:27.655 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:27.914 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:28.174 05:40:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:28.435 
05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:28.435 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:28.714 /dev/nbd0 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:28.714 1+0 records in 00:29:28.714 1+0 records out 00:29:28.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000702856 s, 5.8 MB/s 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:28.714 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:29:29.011 /dev/nbd1 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:29.011 05:40:48 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.011 1+0 records in 00:29:29.011 1+0 records out 00:29:29.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635021 s, 6.5 MB/s 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:29.011 05:40:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:29:29.271 /dev/nbd10 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.271 1+0 records in 00:29:29.271 1+0 records out 00:29:29.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903815 s, 4.5 MB/s 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:29.271 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:29:29.530 /dev/nbd11 00:29:29.530 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:29:29.530 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:29:29.530 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:29:29.530 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:29.530 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.531 1+0 records in 00:29:29.531 1+0 records out 00:29:29.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693073 s, 5.9 MB/s 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:29.531 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:29:29.790 /dev/nbd12 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
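(The waitfornbd fragments traced throughout this section belong to one small helper in common/autotest_common.sh. Reconstructed from the xtrace: the retry bound and the nbdtest scratch file are as traced, the sleep interval is an assumption.)

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do               # wait for the /proc/partitions entry
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do               # then wait until I/O actually succeeds
        dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s nbdtest)
    rm -f nbdtest
    [[ $size != 0 ]]                              # a non-empty read means the device is live
}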
00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:29.790 1+0 records in 00:29:29.790 1+0 records out 00:29:29.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000902196 s, 4.5 MB/s 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:29.790 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:29:30.049 /dev/nbd13 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.049 1+0 records in 00:29:30.049 1+0 records out 00:29:30.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000810868 s, 5.1 MB/s 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:30.049 05:40:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:29:30.308 /dev/nbd14 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.308 1+0 records in 00:29:30.308 1+0 records out 00:29:30.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868923 s, 4.7 MB/s 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:30.308 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd0", 00:29:30.568 "bdev_name": "Nvme0n1" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd1", 00:29:30.568 "bdev_name": "Nvme1n1p1" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd10", 00:29:30.568 "bdev_name": "Nvme1n1p2" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd11", 00:29:30.568 "bdev_name": "Nvme2n1" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd12", 00:29:30.568 "bdev_name": "Nvme2n2" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd13", 00:29:30.568 "bdev_name": "Nvme2n3" 
00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd14", 00:29:30.568 "bdev_name": "Nvme3n1" 00:29:30.568 } 00:29:30.568 ]' 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd0", 00:29:30.568 "bdev_name": "Nvme0n1" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd1", 00:29:30.568 "bdev_name": "Nvme1n1p1" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd10", 00:29:30.568 "bdev_name": "Nvme1n1p2" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd11", 00:29:30.568 "bdev_name": "Nvme2n1" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd12", 00:29:30.568 "bdev_name": "Nvme2n2" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd13", 00:29:30.568 "bdev_name": "Nvme2n3" 00:29:30.568 }, 00:29:30.568 { 00:29:30.568 "nbd_device": "/dev/nbd14", 00:29:30.568 "bdev_name": "Nvme3n1" 00:29:30.568 } 00:29:30.568 ]' 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:30.568 /dev/nbd1 00:29:30.568 /dev/nbd10 00:29:30.568 /dev/nbd11 00:29:30.568 /dev/nbd12 00:29:30.568 /dev/nbd13 00:29:30.568 /dev/nbd14' 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:30.568 /dev/nbd1 00:29:30.568 /dev/nbd10 00:29:30.568 /dev/nbd11 00:29:30.568 /dev/nbd12 00:29:30.568 /dev/nbd13 00:29:30.568 /dev/nbd14' 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:30.568 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:30.569 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:30.569 256+0 records in 00:29:30.569 256+0 records out 00:29:30.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138471 s, 75.7 MB/s 00:29:30.569 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:30.569 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:30.569 256+0 records in 00:29:30.569 256+0 records out 00:29:30.569 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.110484 s, 9.5 MB/s 00:29:30.569 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:30.569 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:30.828 256+0 records in 00:29:30.828 256+0 records out 00:29:30.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112098 s, 9.4 MB/s 00:29:30.828 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:30.828 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:29:30.828 256+0 records in 00:29:30.828 256+0 records out 00:29:30.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116342 s, 9.0 MB/s 00:29:30.828 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:30.828 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:29:31.087 256+0 records in 00:29:31.087 256+0 records out 00:29:31.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110671 s, 9.5 MB/s 00:29:31.087 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:31.087 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:29:31.087 256+0 records in 00:29:31.087 256+0 records out 00:29:31.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11323 s, 9.3 MB/s 00:29:31.087 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:31.087 05:40:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:29:31.347 256+0 records in 00:29:31.347 256+0 records out 00:29:31.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108897 s, 9.6 MB/s 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:29:31.347 256+0 records in 00:29:31.347 256+0 records out 00:29:31.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108056 s, 9.7 MB/s 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.347 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.607 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.865 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.123 05:40:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.382 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.641 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.899 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.158 05:40:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:29:33.417 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:33.675 malloc_lvol_verify 00:29:33.676 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:33.934 549cf88e-21c5-4afd-9b6f-b436c52c5fad 00:29:33.934 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:33.934 b9b9b3e4-9ec1-429b-88e3-b29769e6e8b2 00:29:33.934 05:40:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:34.193 /dev/nbd0 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:29:34.193 mke2fs 1.47.0 (5-Feb-2023) 00:29:34.193 Discarding device blocks: 0/4096 done 00:29:34.193 Creating filesystem with 4096 1k blocks and 1024 inodes 00:29:34.193 00:29:34.193 Allocating group tables: 0/1 done 00:29:34.193 Writing inode tables: 0/1 done 00:29:34.193 Creating journal (1024 blocks): done 00:29:34.193 Writing superblocks and filesystem accounting information: 0/1 done 00:29:34.193 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:29:34.193 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63168 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 63168 ']' 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 63168 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63168 00:29:34.457 killing process with pid 63168 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63168' 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 63168 00:29:34.457 05:40:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 63168 00:29:35.848 ************************************ 00:29:35.848 END TEST bdev_nbd 00:29:35.848 ************************************ 00:29:35.848 05:40:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:29:35.848 00:29:35.848 real 0m12.463s 00:29:35.848 user 0m16.663s 00:29:35.848 sys 0m4.777s 00:29:35.848 05:40:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:35.848 05:40:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:29:35.848 05:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:29:35.848 05:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:29:35.848 05:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:29:35.848 skipping fio tests on NVMe due to multi-ns failures. 00:29:35.848 05:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:29:35.848 05:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:35.848 05:40:55 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:35.848 05:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:29:35.848 05:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:35.848 05:40:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:35.848 ************************************ 00:29:35.848 START TEST bdev_verify 00:29:35.848 ************************************ 00:29:35.848 05:40:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:36.108 [2024-11-20 05:40:55.816967] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:36.108 [2024-11-20 05:40:55.817098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63595 ] 00:29:36.108 [2024-11-20 05:40:55.998819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:36.368 [2024-11-20 05:40:56.142283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.368 [2024-11-20 05:40:56.142334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.308 Running I/O for 5 seconds... 
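The verify stage hands bdevperf the exact command line recorded above: -q 128 sets the queue depth per job, -o 4096 the I/O size in bytes, -w verify selects a write-read-compare workload, -t 5 runs it for five seconds, and -m 0x3 pins the app to cores 0 and 1. The -C flag lets every core in the mask submit I/O to each bdev, which is why the latency table below reports a Core Mask 0x1 and a Core Mask 0x2 job for every device. The run should be reproducible outside the harness with the same invocation:

# Re-run the verify workload by hand, using the flags from the log above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3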
00:29:39.625 19584.00 IOPS, 76.50 MiB/s
[2024-11-20T05:41:00.486Z] 19296.00 IOPS, 75.38 MiB/s
[2024-11-20T05:41:01.435Z] 19306.67 IOPS, 75.42 MiB/s
[2024-11-20T05:41:02.373Z] 19296.00 IOPS, 75.38 MiB/s
[2024-11-20T05:41:02.373Z] 19033.60 IOPS, 74.35 MiB/s
00:29:42.454 Latency(us)
00:29:42.454 [2024-11-20T05:41:02.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x0 length 0xbd0bd
00:29:42.454 Nvme0n1 : 5.06 1340.47 5.24 0.00 0.00 95261.88 21177.57 81962.93
00:29:42.454 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:29:42.454 Nvme0n1 : 5.07 1337.71 5.23 0.00 0.00 94795.77 22780.20 72805.06
00:29:42.454 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x0 length 0x4ff80
00:29:42.454 Nvme1n1p1 : 5.06 1339.97 5.23 0.00 0.00 95160.96 21406.52 75094.53
00:29:42.454 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x4ff80 length 0x4ff80
00:29:42.454 Nvme1n1p1 : 5.07 1337.01 5.22 0.00 0.00 94663.44 21749.94 75552.42
00:29:42.454 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x0 length 0x4ff7f
00:29:42.454 Nvme1n1p2 : 5.07 1339.19 5.23 0.00 0.00 94996.90 22207.83 73262.95
00:29:42.454 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:29:42.454 Nvme1n1p2 : 5.08 1346.72 5.26 0.00 0.00 93906.20 5695.05 76926.10
00:29:42.454 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x0 length 0x80000
00:29:42.454 Nvme2n1 : 5.07 1338.45 5.23 0.00 0.00 94859.51 23352.57 72805.06
00:29:42.454 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x80000 length 0x80000
00:29:42.454 Nvme2n1 : 5.09 1346.41 5.26 0.00 0.00 93768.56 5838.14 79673.46
00:29:42.454 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x0 length 0x80000
00:29:42.454 Nvme2n2 : 5.07 1337.78 5.23 0.00 0.00 94728.24 23009.15 71889.27
00:29:42.454 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x80000 length 0x80000
00:29:42.454 Nvme2n2 : 5.06 1339.92 5.23 0.00 0.00 95280.88 22894.67 82878.71
00:29:42.454 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x0 length 0x80000
00:29:42.454 Nvme2n3 : 5.07 1337.11 5.22 0.00 0.00 94598.61 21520.99 69141.91
00:29:42.454 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.454 Verification LBA range: start 0x80000 length 0x80000
00:29:42.454 Nvme2n3 : 5.07 1339.17 5.23 0.00 0.00 95109.40 23581.51 72805.06
00:29:42.454 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:42.455 Verification LBA range: start 0x0 length 0x20000
00:29:42.455 Nvme3n1 : 5.08 1347.37 5.26 0.00 0.00 93817.25 4779.26 71889.27
00:29:42.455 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:42.455 Verification LBA range: start 0x20000 length 0x20000
00:29:42.455 Nvme3n1 : 5.07 1338.38 5.23 0.00 0.00 94944.01 24268.35 71889.27
00:29:42.455 [2024-11-20T05:41:02.374Z] ===================================================================================================================
00:29:42.455 [2024-11-20T05:41:02.374Z] Total : 18765.67 73.30 0.00 0.00 94704.78 4779.26 82878.71
00:29:43.835
00:29:43.835 real 0m8.016s
00:29:43.835 user 0m14.730s
00:29:43.835 sys 0m0.400s
00:29:43.835 05:41:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:43.835 05:41:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:29:43.835 ************************************
00:29:43.835 END TEST bdev_verify
00:29:43.835 ************************************
00:29:44.095 05:41:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:44.095 05:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:29:44.095 05:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:44.095 05:41:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:29:44.095 ************************************
00:29:44.095 START TEST bdev_verify_big_io
00:29:44.095 ************************************
00:29:44.095 05:41:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:44.095 [2024-11-20 05:41:03.889151] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:29:44.095 [2024-11-20 05:41:03.889327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63703 ]
00:29:44.355 [2024-11-20 05:41:04.053795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:44.355 [2024-11-20 05:41:04.197435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:44.355 [2024-11-20 05:41:04.197488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:45.294 Running I/O for 5 seconds...
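bdev_verify_big_io, launched just above, reuses the same bdev.json and differs from bdev_verify only in the I/O size, -o 65536 in place of -o 4096. The larger transfers trade IOPS for per-I/O bandwidth, which shows in the next table: aggregate throughput lands near 134 MiB/s from only about 2150 IOPS, versus roughly 73 MiB/s from about 18800 IOPS in the 4 KiB pass. Side by side, with paths shortened for comparison:

# bdev_verify:        4 KiB I/Os
bdevperf --json bdev.json -q 128 -o 4096  -w verify -t 5 -C -m 0x3
# bdev_verify_big_io: 64 KiB I/Os
bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3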
00:29:50.490 1294.00 IOPS, 80.88 MiB/s
[2024-11-20T05:41:10.978Z] 3075.50 IOPS, 192.22 MiB/s
[2024-11-20T05:41:11.237Z] 3648.00 IOPS, 228.00 MiB/s
00:29:51.318 Latency(us)
00:29:51.318 [2024-11-20T05:41:11.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.318 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x0 length 0xbd0b
00:29:51.318 Nvme0n1 : 5.69 142.96 8.94 0.00 0.00 867858.96 27817.03 926776.34
00:29:51.318 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0xbd0b length 0xbd0b
00:29:51.318 Nvme0n1 : 5.70 153.01 9.56 0.00 0.00 748989.76 22894.67 1135575.76
00:29:51.318 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x0 length 0x4ff8
00:29:51.318 Nvme1n1p1 : 5.69 138.00 8.63 0.00 0.00 875396.76 58381.41 1370017.20
00:29:51.318 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x4ff8 length 0x4ff8
00:29:51.318 Nvme1n1p1 : 5.75 161.46 10.09 0.00 0.00 692190.20 22894.67 901134.31
00:29:51.318 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x0 length 0x4ff7
00:29:51.318 Nvme1n1p2 : 5.75 142.36 8.90 0.00 0.00 831709.01 38692.00 1384669.79
00:29:51.318 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x4ff7 length 0x4ff7
00:29:51.318 Nvme1n1p2 : 5.80 181.22 11.33 0.00 0.00 604974.39 1731.41 912123.75
00:29:51.318 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x0 length 0x8000
00:29:51.318 Nvme2n1 : 5.75 142.33 8.90 0.00 0.00 810225.53 38920.94 1413974.97
00:29:51.318 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.318 Verification LBA range: start 0x8000 length 0x8000
00:29:51.319 Nvme2n1 : 5.63 147.34 9.21 0.00 0.00 843477.43 20261.79 904797.46
00:29:51.319 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.319 Verification LBA range: start 0x0 length 0x8000
00:29:51.319 Nvme2n2 : 5.75 146.91 9.18 0.00 0.00 771096.23 40981.46 1435953.86
00:29:51.319 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.319 Verification LBA range: start 0x8000 length 0x8000
00:29:51.319 Nvme2n2 : 5.67 146.95 9.18 0.00 0.00 824931.47 52886.69 805892.47
00:29:51.319 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.319 Verification LBA range: start 0x0 length 0x8000
00:29:51.319 Nvme2n3 : 5.82 157.42 9.84 0.00 0.00 703384.39 17972.32 1450606.45
00:29:51.319 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.319 Verification LBA range: start 0x8000 length 0x8000
00:29:51.319 Nvme2n3 : 5.70 152.55 9.53 0.00 0.00 783616.80 39836.73 864502.83
00:29:51.319 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:29:51.319 Verification LBA range: start 0x0 length 0x2000
00:29:51.319 Nvme3n1 : 5.86 183.82 11.49 0.00 0.00 589579.50 5351.63 860839.69
00:29:51.319 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:29:51.319 Verification LBA range: start 0x2000 length 0x2000
00:29:51.319 Nvme3n1 : 5.67 152.12 9.51 0.00 0.00 769410.62 39836.73 868165.98
[2024-11-20T05:41:11.238Z] ===================================================================================================================
00:29:51.319 [2024-11-20T05:41:11.238Z] Total : 2148.48 134.28 0.00 0.00 757426.21 1731.41 1450606.45
00:29:53.922
00:29:53.922 real 0m9.574s
00:29:53.922 user 0m17.824s
00:29:53.922 sys 0m0.434s
00:29:53.922 05:41:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:53.922 05:41:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:29:53.922 ************************************
00:29:53.922 END TEST bdev_verify_big_io
00:29:53.922 ************************************
00:29:53.922 05:41:13 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:53.922 05:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:29:53.922 05:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:53.922 05:41:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:29:53.922 ************************************
00:29:53.922 START TEST bdev_write_zeroes
00:29:53.922 ************************************
00:29:53.922 05:41:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:53.922 [2024-11-20 05:41:13.520445] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:29:53.922 [2024-11-20 05:41:13.520556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ]
00:29:54.181 [2024-11-20 05:41:13.701096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:54.181 [2024-11-20 05:41:13.842669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:54.750 Running I/O for 1 seconds...
00:29:56.123 60032.00 IOPS, 234.50 MiB/s
00:29:56.124
00:29:56.124 Latency(us)
00:29:56.124 [2024-11-20T05:41:16.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:56.124 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme0n1 : 1.03 8531.58 33.33 0.00 0.00 14968.09 12821.02 36631.48
00:29:56.124 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme1n1p1 : 1.03 8522.98 33.29 0.00 0.00 14960.67 12935.49 37318.32
00:29:56.124 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme1n1p2 : 1.03 8514.74 33.26 0.00 0.00 14851.78 12477.60 30907.81
00:29:56.124 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme2n1 : 1.03 8506.93 33.23 0.00 0.00 14824.89 12878.25 29992.02
00:29:56.124 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme2n2 : 1.03 8498.32 33.20 0.00 0.00 14795.16 12477.60 28732.81
00:29:56.124 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme2n3 : 1.03 8490.69 33.17 0.00 0.00 14760.48 10932.21 26901.24
00:29:56.124 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:29:56.124 Nvme3n1 : 1.03 8483.16 33.14 0.00 0.00 14731.44 9043.40 26443.35
00:29:56.124 [2024-11-20T05:41:16.043Z] ===================================================================================================================
00:29:56.124 [2024-11-20T05:41:16.043Z] Total : 59548.41 232.61 0.00 0.00 14841.79 9043.40 37318.32
00:29:57.062
00:29:57.062 real 0m3.497s
00:29:57.062 user 0m3.051s
00:29:57.062 sys 0m0.330s
00:29:57.062 05:41:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:57.062 05:41:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:29:57.062 ************************************
00:29:57.062 END TEST bdev_write_zeroes
00:29:57.062 ************************************
00:29:57.062 05:41:16 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:57.062 05:41:16 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:29:57.062 05:41:16 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:57.062 05:41:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:29:57.062 ************************************
00:29:57.062 START TEST bdev_json_nonenclosed
00:29:57.062 ************************************
00:29:57.062 05:41:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:29:57.321 [2024-11-20 05:41:17.075189] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
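All three summary tables above share one layout: per-job rows keyed by bdev name and core mask, then a Total row carrying IOPS, MiB/s, failure and timeout rates, and average/min/max latency in microseconds. A convenience sketch, not part of the suite, for pulling that aggregate row out of a captured run; bdevperf.log is a hypothetical file holding output in the format shown above, with or without timestamp prefixes:

# Locate the "Total" token on each line and print the fields that follow it:
# Total : <IOPS> <MiB/s> <Fail/s> <TO/s> <Average> <min> <max>
awk '{ for (i = 1; i <= NF; i++) if ($i == "Total" && $(i+1) == ":")
         printf "IOPS=%s MiB/s=%s avg_us=%s\n", $(i+2), $(i+3), $(i+6) }' bdevperf.log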
00:29:57.321 [2024-11-20 05:41:17.075348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63883 ] 00:29:57.581 [2024-11-20 05:41:17.257267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.581 [2024-11-20 05:41:17.405557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.581 [2024-11-20 05:41:17.405675] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:57.581 [2024-11-20 05:41:17.405697] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:57.581 [2024-11-20 05:41:17.405708] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:57.842 00:29:57.842 real 0m0.698s 00:29:57.842 user 0m0.438s 00:29:57.842 sys 0m0.155s 00:29:57.842 05:41:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:57.842 05:41:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:29:57.842 ************************************ 00:29:57.842 END TEST bdev_json_nonenclosed 00:29:57.842 ************************************ 00:29:57.842 05:41:17 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:57.842 05:41:17 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:29:57.842 05:41:17 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:57.842 05:41:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:57.842 ************************************ 00:29:57.842 START TEST bdev_json_nonarray 00:29:57.842 ************************************ 00:29:57.842 05:41:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:58.100 [2024-11-20 05:41:17.835491] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:29:58.100 [2024-11-20 05:41:17.835635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63909 ] 00:29:58.100 [2024-11-20 05:41:18.017288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.362 [2024-11-20 05:41:18.159651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.362 [2024-11-20 05:41:18.159798] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
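The two JSON-config negative tests exercise exactly the shape errors reported above: a document whose top level is not enclosed in {} and a "subsystems" member that is not an array. For contrast, a minimal well-formed config looks like the following sketch; the malloc bdev parameters are illustrative, not taken from this run:

# A valid --json config is a JSON object whose "subsystems" member is an
# array of subsystem objects, each with its own "config" array of RPC calls.
cat > /tmp/valid.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF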
00:29:58.362 [2024-11-20 05:41:18.159827] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:58.362 [2024-11-20 05:41:18.159838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:58.625 00:29:58.625 real 0m0.683s 00:29:58.625 user 0m0.438s 00:29:58.625 sys 0m0.140s 00:29:58.625 05:41:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:58.625 05:41:18 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:29:58.625 ************************************ 00:29:58.625 END TEST bdev_json_nonarray 00:29:58.625 ************************************ 00:29:58.625 05:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:29:58.625 05:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:29:58.625 05:41:18 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:29:58.626 05:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:58.626 05:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:58.626 05:41:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:29:58.626 ************************************ 00:29:58.626 START TEST bdev_gpt_uuid 00:29:58.626 ************************************ 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63940 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63940 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63940 ']' 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:58.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:58.626 05:41:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:29:58.885 [2024-11-20 05:41:18.612001] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
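The bdev_gpt_uuid test starting here brings up a standalone spdk_tgt, loads the same bdev.json, and checks that a GPT partition bdev can be looked up by its unique partition GUID and that the GUID round-trips through the RPC layer. The core of that check, extracted as a sketch against the target's default RPC socket; the GUID is the one this run verifies below:

# Look up a GPT partition bdev by its unique partition GUID and confirm the
# GUID appears both as the bdev alias and in the gpt driver-specific details.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030

bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
echo "$bdev" | jq -r length                                    # expect 1
echo "$bdev" | jq -r '.[0].aliases[0]'                         # expect the GUID
echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid'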
00:29:58.885 [2024-11-20 05:41:18.612260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63940 ] 00:29:58.885 [2024-11-20 05:41:18.794617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.144 [2024-11-20 05:41:18.936319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.082 05:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:00.082 05:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:30:00.082 05:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:00.082 05:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.082 05:41:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:00.650 Some configs were skipped because the RPC state that can call them passed over. 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:30:00.650 { 00:30:00.650 "name": "Nvme1n1p1", 00:30:00.650 "aliases": [ 00:30:00.650 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:00.650 ], 00:30:00.650 "product_name": "GPT Disk", 00:30:00.650 "block_size": 4096, 00:30:00.650 "num_blocks": 655104, 00:30:00.650 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:00.650 "assigned_rate_limits": { 00:30:00.650 "rw_ios_per_sec": 0, 00:30:00.650 "rw_mbytes_per_sec": 0, 00:30:00.650 "r_mbytes_per_sec": 0, 00:30:00.650 "w_mbytes_per_sec": 0 00:30:00.650 }, 00:30:00.650 "claimed": false, 00:30:00.650 "zoned": false, 00:30:00.650 "supported_io_types": { 00:30:00.650 "read": true, 00:30:00.650 "write": true, 00:30:00.650 "unmap": true, 00:30:00.650 "flush": true, 00:30:00.650 "reset": true, 00:30:00.650 "nvme_admin": false, 00:30:00.650 "nvme_io": false, 00:30:00.650 "nvme_io_md": false, 00:30:00.650 "write_zeroes": true, 00:30:00.650 "zcopy": false, 00:30:00.650 "get_zone_info": false, 00:30:00.650 "zone_management": false, 00:30:00.650 "zone_append": false, 00:30:00.650 "compare": true, 00:30:00.650 "compare_and_write": false, 00:30:00.650 "abort": true, 00:30:00.650 "seek_hole": false, 00:30:00.650 "seek_data": false, 00:30:00.650 "copy": true, 00:30:00.650 "nvme_iov_md": false 00:30:00.650 }, 00:30:00.650 "driver_specific": { 
00:30:00.650 "gpt": { 00:30:00.650 "base_bdev": "Nvme1n1", 00:30:00.650 "offset_blocks": 256, 00:30:00.650 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:00.650 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:00.650 "partition_name": "SPDK_TEST_first" 00:30:00.650 } 00:30:00.650 } 00:30:00.650 } 00:30:00.650 ]' 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:00.650 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:30:00.651 { 00:30:00.651 "name": "Nvme1n1p2", 00:30:00.651 "aliases": [ 00:30:00.651 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:00.651 ], 00:30:00.651 "product_name": "GPT Disk", 00:30:00.651 "block_size": 4096, 00:30:00.651 "num_blocks": 655103, 00:30:00.651 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:00.651 "assigned_rate_limits": { 00:30:00.651 "rw_ios_per_sec": 0, 00:30:00.651 "rw_mbytes_per_sec": 0, 00:30:00.651 "r_mbytes_per_sec": 0, 00:30:00.651 "w_mbytes_per_sec": 0 00:30:00.651 }, 00:30:00.651 "claimed": false, 00:30:00.651 "zoned": false, 00:30:00.651 "supported_io_types": { 00:30:00.651 "read": true, 00:30:00.651 "write": true, 00:30:00.651 "unmap": true, 00:30:00.651 "flush": true, 00:30:00.651 "reset": true, 00:30:00.651 "nvme_admin": false, 00:30:00.651 "nvme_io": false, 00:30:00.651 "nvme_io_md": false, 00:30:00.651 "write_zeroes": true, 00:30:00.651 "zcopy": false, 00:30:00.651 "get_zone_info": false, 00:30:00.651 "zone_management": false, 00:30:00.651 "zone_append": false, 00:30:00.651 "compare": true, 00:30:00.651 "compare_and_write": false, 00:30:00.651 "abort": true, 00:30:00.651 "seek_hole": false, 00:30:00.651 "seek_data": false, 00:30:00.651 "copy": true, 00:30:00.651 "nvme_iov_md": false 00:30:00.651 }, 00:30:00.651 "driver_specific": { 00:30:00.651 "gpt": { 00:30:00.651 "base_bdev": "Nvme1n1", 00:30:00.651 "offset_blocks": 655360, 00:30:00.651 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:00.651 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:00.651 "partition_name": "SPDK_TEST_second" 00:30:00.651 } 00:30:00.651 } 00:30:00.651 } 00:30:00.651 ]' 00:30:00.651 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63940 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63940 ']' 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63940 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63940 00:30:00.910 killing process with pid 63940 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63940' 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63940 00:30:00.910 05:41:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63940 00:30:04.202 00:30:04.202 real 0m4.934s 00:30:04.202 user 0m4.907s 00:30:04.202 sys 0m0.713s 00:30:04.202 05:41:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:04.202 ************************************ 00:30:04.202 END TEST bdev_gpt_uuid 00:30:04.202 ************************************ 00:30:04.202 05:41:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:30:04.202 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:30:04.202 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:30:04.202 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:30:04.203 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:04.203 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:04.203 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:30:04.203 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:30:04.203 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:30:04.203 05:41:23 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:04.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:04.462 Waiting for block devices as requested 00:30:04.462 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.462 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:30:04.462 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.721 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.086 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:10.086 05:41:29 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:30:10.086 05:41:29 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:30:10.086 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:10.086 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:10.086 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:10.086 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:10.086 05:41:29 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:30:10.086 00:30:10.086 real 1m7.155s 00:30:10.086 user 1m24.136s 00:30:10.086 sys 0m11.963s 00:30:10.086 ************************************ 00:30:10.086 END TEST blockdev_nvme_gpt 00:30:10.086 ************************************ 00:30:10.086 05:41:29 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:10.086 05:41:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:30:10.086 05:41:29 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:10.086 05:41:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:10.086 05:41:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:10.086 05:41:29 -- common/autotest_common.sh@10 -- # set +x 00:30:10.086 ************************************ 00:30:10.086 START TEST nvme 00:30:10.086 ************************************ 00:30:10.086 05:41:29 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:10.086 * Looking for test storage... 00:30:10.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:10.345 05:41:30 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:10.345 05:41:30 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:10.345 05:41:30 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:30:10.345 05:41:30 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:10.345 05:41:30 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.345 05:41:30 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.345 05:41:30 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.345 05:41:30 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.345 05:41:30 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.345 05:41:30 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.345 05:41:30 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.345 05:41:30 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.345 05:41:30 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.345 05:41:30 nvme -- scripts/common.sh@344 -- # case "$op" in 00:30:10.345 05:41:30 nvme -- scripts/common.sh@345 -- # : 1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.345 05:41:30 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.345 05:41:30 nvme -- scripts/common.sh@365 -- # decimal 1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@353 -- # local d=1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.345 05:41:30 nvme -- scripts/common.sh@355 -- # echo 1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.345 05:41:30 nvme -- scripts/common.sh@366 -- # decimal 2 00:30:10.345 05:41:30 nvme -- scripts/common.sh@353 -- # local d=2 00:30:10.345 05:41:30 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.345 05:41:30 nvme -- scripts/common.sh@355 -- # echo 2 00:30:10.346 05:41:30 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.346 05:41:30 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.346 05:41:30 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.346 05:41:30 nvme -- scripts/common.sh@368 -- # return 0 00:30:10.346 05:41:30 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.346 05:41:30 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.346 --rc genhtml_branch_coverage=1 00:30:10.346 --rc genhtml_function_coverage=1 00:30:10.346 --rc genhtml_legend=1 00:30:10.346 --rc geninfo_all_blocks=1 00:30:10.346 --rc geninfo_unexecuted_blocks=1 00:30:10.346 00:30:10.346 ' 00:30:10.346 05:41:30 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.346 --rc genhtml_branch_coverage=1 00:30:10.346 --rc genhtml_function_coverage=1 00:30:10.346 --rc genhtml_legend=1 00:30:10.346 --rc geninfo_all_blocks=1 00:30:10.346 --rc geninfo_unexecuted_blocks=1 00:30:10.346 00:30:10.346 ' 00:30:10.346 05:41:30 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.346 --rc genhtml_branch_coverage=1 00:30:10.346 --rc genhtml_function_coverage=1 00:30:10.346 --rc genhtml_legend=1 00:30:10.346 --rc geninfo_all_blocks=1 00:30:10.346 --rc geninfo_unexecuted_blocks=1 00:30:10.346 00:30:10.346 ' 00:30:10.346 05:41:30 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.346 --rc genhtml_branch_coverage=1 00:30:10.346 --rc genhtml_function_coverage=1 00:30:10.346 --rc genhtml_legend=1 00:30:10.346 --rc geninfo_all_blocks=1 00:30:10.346 --rc geninfo_unexecuted_blocks=1 00:30:10.346 00:30:10.346 ' 00:30:10.346 05:41:30 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:10.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:11.852 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:11.852 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:11.852 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:30:11.852 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:30:11.852 05:41:31 nvme -- nvme/nvme.sh@79 -- # uname 00:30:11.852 05:41:31 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:11.852 05:41:31 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:11.852 05:41:31 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:11.852 05:41:31 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1073 -- # stubpid=64598 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:30:11.852 Waiting for stub to ready for secondary processes... 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64598 ]] 00:30:11.852 05:41:31 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:30:11.852 [2024-11-20 05:41:31.736176] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:30:11.852 [2024-11-20 05:41:31.736441] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:30:12.790 05:41:32 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:12.790 05:41:32 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64598 ]] 00:30:12.790 05:41:32 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:30:13.730 [2024-11-20 05:41:33.441196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:13.730 [2024-11-20 05:41:33.591364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.730 [2024-11-20 05:41:33.591515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.730 [2024-11-20 05:41:33.591554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.730 [2024-11-20 05:41:33.610393] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:30:13.730 [2024-11-20 05:41:33.610520] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.730 [2024-11-20 05:41:33.626519] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:13.730 [2024-11-20 05:41:33.626785] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:13.730 [2024-11-20 05:41:33.630529] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.730 [2024-11-20 05:41:33.630897] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:30:13.730 [2024-11-20 05:41:33.631078] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:30:13.730 [2024-11-20 05:41:33.636201] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.730 [2024-11-20 05:41:33.636632] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:30:13.730 [2024-11-20 05:41:33.637100] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:30:13.730 [2024-11-20 05:41:33.641938] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:13.730 [2024-11-20 05:41:33.642209] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:30:13.730 [2024-11-20 05:41:33.642336] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:30:13.730 [2024-11-20 05:41:33.642422] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:30:13.730 [2024-11-20 05:41:33.642533] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:30:13.990 05:41:33 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:13.990 done. 00:30:13.990 05:41:33 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:30:13.990 05:41:33 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:13.990 05:41:33 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:30:13.990 05:41:33 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:13.990 05:41:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:13.990 ************************************ 00:30:13.990 START TEST nvme_reset 00:30:13.990 ************************************ 00:30:13.990 05:41:33 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:14.249 Initializing NVMe Controllers 00:30:14.249 Skipping QEMU NVMe SSD at 0000:00:10.0 00:30:14.249 Skipping QEMU NVMe SSD at 0000:00:11.0 00:30:14.249 Skipping QEMU NVMe SSD at 0000:00:13.0 00:30:14.249 Skipping QEMU NVMe SSD at 0000:00:12.0 00:30:14.249 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:14.249 00:30:14.249 real 0m0.299s 00:30:14.249 user 0m0.097s 00:30:14.249 sys 0m0.147s 00:30:14.249 05:41:33 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:14.249 ************************************ 00:30:14.249 END TEST nvme_reset 00:30:14.249 ************************************ 00:30:14.249 05:41:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:30:14.249 05:41:34 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:14.249 05:41:34 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:14.249 05:41:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:14.250 05:41:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:14.250 ************************************ 00:30:14.250 START TEST nvme_identify 00:30:14.250 ************************************ 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:30:14.250 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:30:14.250 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:14.250 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:14.250 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:14.250 05:41:34 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:30:14.250 05:41:34 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:14.250 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:14.525 [2024-11-20 05:41:34.373334] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64631 terminated unexpected 00:30:14.525 ===================================================== 00:30:14.525 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:14.525 ===================================================== 00:30:14.525 Controller Capabilities/Features 00:30:14.525 ================================ 00:30:14.525 Vendor ID: 1b36 00:30:14.525 Subsystem Vendor ID: 1af4 00:30:14.525 Serial Number: 12340 00:30:14.525 Model Number: QEMU NVMe Ctrl 00:30:14.525 Firmware Version: 8.0.0 00:30:14.525 Recommended Arb Burst: 6 00:30:14.525 IEEE OUI Identifier: 00 54 52 00:30:14.525 Multi-path I/O 00:30:14.525 May have multiple subsystem ports: No 00:30:14.525 May have multiple controllers: No 00:30:14.525 Associated with SR-IOV VF: No 00:30:14.525 Max Data Transfer Size: 524288 00:30:14.525 Max Number of Namespaces: 256 00:30:14.525 Max Number of I/O Queues: 64 00:30:14.525 NVMe Specification Version (VS): 1.4 00:30:14.525 NVMe Specification Version (Identify): 1.4 00:30:14.525 Maximum Queue Entries: 2048 00:30:14.525 Contiguous Queues Required: Yes 00:30:14.525 Arbitration Mechanisms Supported 00:30:14.525 Weighted Round Robin: Not Supported 00:30:14.525 Vendor Specific: Not Supported 00:30:14.525 Reset Timeout: 7500 ms 00:30:14.525 Doorbell Stride: 4 bytes 00:30:14.525 NVM Subsystem Reset: Not Supported 00:30:14.525 Command Sets Supported 00:30:14.525 NVM Command Set: Supported 00:30:14.525 Boot Partition: Not Supported 00:30:14.525 Memory Page Size Minimum: 4096 bytes 00:30:14.525 Memory Page Size Maximum: 65536 bytes 00:30:14.525 Persistent Memory Region: Not Supported 00:30:14.525 Optional Asynchronous Events Supported 00:30:14.525 Namespace Attribute Notices: Supported 00:30:14.525 Firmware Activation Notices: Not Supported 00:30:14.525 ANA Change Notices: Not Supported 00:30:14.525 PLE Aggregate Log Change Notices: Not Supported 00:30:14.525 LBA Status Info Alert Notices: Not Supported 00:30:14.525 EGE Aggregate Log Change Notices: Not Supported 00:30:14.525 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.525 Zone Descriptor Change Notices: Not Supported 00:30:14.525 Discovery Log Change Notices: Not Supported 00:30:14.525 Controller Attributes 00:30:14.525 128-bit Host Identifier: Not Supported 00:30:14.525 Non-Operational Permissive Mode: Not Supported 00:30:14.525 NVM Sets: Not Supported 00:30:14.525 Read Recovery Levels: Not Supported 00:30:14.525 Endurance Groups: Not Supported 00:30:14.525 Predictable Latency Mode: Not Supported 00:30:14.525 Traffic Based Keep ALive: Not Supported 00:30:14.525 Namespace Granularity: Not Supported 00:30:14.525 SQ Associations: Not Supported 00:30:14.525 UUID List: Not Supported 00:30:14.525 Multi-Domain Subsystem: Not Supported 00:30:14.525 Fixed Capacity Management: Not Supported 00:30:14.525 Variable Capacity Management: Not Supported 00:30:14.525 Delete Endurance Group: Not Supported 00:30:14.525 Delete NVM Set: Not Supported 00:30:14.525 Extended LBA Formats Supported: Supported 00:30:14.525 Flexible Data Placement Supported: Not Supported 00:30:14.525 00:30:14.525 Controller Memory Buffer Support 00:30:14.525 ================================ 00:30:14.525 Supported: No 
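For reference, the bdf enumeration traced above (autotest_common.sh@1497) reduces to a single jq filter over the JSON config emitted by gen_nvme.sh; a minimal sketch of the same steps, assuming the repo layout used in this run:

    # Enumerate local NVMe controllers the way get_nvme_bdfs does:
    # gen_nvme.sh prints an SPDK JSON config, jq extracts each controller's PCI address (traddr).
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"   # in this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

A single spdk_nvme_identify -i 0 invocation then walks all probed controllers, producing the per-controller dumps that follow.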
00:30:14.525 00:30:14.525 Persistent Memory Region Support 00:30:14.525 ================================ 00:30:14.525 Supported: No 00:30:14.525 00:30:14.525 Admin Command Set Attributes 00:30:14.525 ============================ 00:30:14.525 Security Send/Receive: Not Supported 00:30:14.525 Format NVM: Supported 00:30:14.525 Firmware Activate/Download: Not Supported 00:30:14.525 Namespace Management: Supported 00:30:14.525 Device Self-Test: Not Supported 00:30:14.525 Directives: Supported 00:30:14.525 NVMe-MI: Not Supported 00:30:14.525 Virtualization Management: Not Supported 00:30:14.525 Doorbell Buffer Config: Supported 00:30:14.525 Get LBA Status Capability: Not Supported 00:30:14.525 Command & Feature Lockdown Capability: Not Supported 00:30:14.525 Abort Command Limit: 4 00:30:14.525 Async Event Request Limit: 4 00:30:14.525 Number of Firmware Slots: N/A 00:30:14.525 Firmware Slot 1 Read-Only: N/A 00:30:14.525 Firmware Activation Without Reset: N/A 00:30:14.525 Multiple Update Detection Support: N/A 00:30:14.525 Firmware Update Granularity: No Information Provided 00:30:14.525 Per-Namespace SMART Log: Yes 00:30:14.525 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.525 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:14.525 Command Effects Log Page: Supported 00:30:14.525 Get Log Page Extended Data: Supported 00:30:14.525 Telemetry Log Pages: Not Supported 00:30:14.525 Persistent Event Log Pages: Not Supported 00:30:14.525 Supported Log Pages Log Page: May Support 00:30:14.525 Commands Supported & Effects Log Page: Not Supported 00:30:14.525 Feature Identifiers & Effects Log Page:May Support 00:30:14.525 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.525 Data Area 4 for Telemetry Log: Not Supported 00:30:14.525 Error Log Page Entries Supported: 1 00:30:14.525 Keep Alive: Not Supported 00:30:14.525 00:30:14.525 NVM Command Set Attributes 00:30:14.526 ========================== 00:30:14.526 Submission Queue Entry Size 00:30:14.526 Max: 64 00:30:14.526 Min: 64 00:30:14.526 Completion Queue Entry Size 00:30:14.526 Max: 16 00:30:14.526 Min: 16 00:30:14.526 Number of Namespaces: 256 00:30:14.526 Compare Command: Supported 00:30:14.526 Write Uncorrectable Command: Not Supported 00:30:14.526 Dataset Management Command: Supported 00:30:14.526 Write Zeroes Command: Supported 00:30:14.526 Set Features Save Field: Supported 00:30:14.526 Reservations: Not Supported 00:30:14.526 Timestamp: Supported 00:30:14.526 Copy: Supported 00:30:14.526 Volatile Write Cache: Present 00:30:14.526 Atomic Write Unit (Normal): 1 00:30:14.526 Atomic Write Unit (PFail): 1 00:30:14.526 Atomic Compare & Write Unit: 1 00:30:14.526 Fused Compare & Write: Not Supported 00:30:14.526 Scatter-Gather List 00:30:14.526 SGL Command Set: Supported 00:30:14.526 SGL Keyed: Not Supported 00:30:14.526 SGL Bit Bucket Descriptor: Not Supported 00:30:14.526 SGL Metadata Pointer: Not Supported 00:30:14.526 Oversized SGL: Not Supported 00:30:14.526 SGL Metadata Address: Not Supported 00:30:14.526 SGL Offset: Not Supported 00:30:14.526 Transport SGL Data Block: Not Supported 00:30:14.526 Replay Protected Memory Block: Not Supported 00:30:14.526 00:30:14.526 Firmware Slot Information 00:30:14.526 ========================= 00:30:14.526 Active slot: 1 00:30:14.526 Slot 1 Firmware Revision: 1.0 00:30:14.526 00:30:14.526 00:30:14.526 Commands Supported and Effects 00:30:14.526 ============================== 00:30:14.526 Admin Commands 00:30:14.526 -------------- 00:30:14.526 Delete I/O Submission Queue (00h): Supported 
00:30:14.526 Create I/O Submission Queue (01h): Supported 00:30:14.526 Get Log Page (02h): Supported 00:30:14.526 Delete I/O Completion Queue (04h): Supported 00:30:14.526 Create I/O Completion Queue (05h): Supported 00:30:14.526 Identify (06h): Supported 00:30:14.526 Abort (08h): Supported 00:30:14.526 Set Features (09h): Supported 00:30:14.526 Get Features (0Ah): Supported 00:30:14.526 Asynchronous Event Request (0Ch): Supported 00:30:14.526 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.526 Directive Send (19h): Supported 00:30:14.526 Directive Receive (1Ah): Supported 00:30:14.526 Virtualization Management (1Ch): Supported 00:30:14.526 Doorbell Buffer Config (7Ch): Supported 00:30:14.526 Format NVM (80h): Supported LBA-Change 00:30:14.526 I/O Commands 00:30:14.526 ------------ 00:30:14.526 Flush (00h): Supported LBA-Change 00:30:14.526 Write (01h): Supported LBA-Change 00:30:14.526 Read (02h): Supported 00:30:14.526 Compare (05h): Supported 00:30:14.526 Write Zeroes (08h): Supported LBA-Change 00:30:14.526 Dataset Management (09h): Supported LBA-Change 00:30:14.526 Unknown (0Ch): Supported 00:30:14.526 Unknown (12h): Supported 00:30:14.526 Copy (19h): Supported LBA-Change 00:30:14.526 Unknown (1Dh): Supported LBA-Change 00:30:14.526 00:30:14.526 Error Log 00:30:14.526 ========= 00:30:14.526 00:30:14.526 Arbitration 00:30:14.526 =========== 00:30:14.526 Arbitration Burst: no limit 00:30:14.526 00:30:14.526 Power Management 00:30:14.526 ================ 00:30:14.526 Number of Power States: 1 00:30:14.526 Current Power State: Power State #0 00:30:14.526 Power State #0: 00:30:14.526 Max Power: 25.00 W 00:30:14.526 Non-Operational State: Operational 00:30:14.526 Entry Latency: 16 microseconds 00:30:14.526 Exit Latency: 4 microseconds 00:30:14.526 Relative Read Throughput: 0 00:30:14.526 Relative Read Latency: 0 00:30:14.526 Relative Write Throughput: 0 00:30:14.526 Relative Write Latency: 0 00:30:14.526 Idle Power[2024-11-20 05:41:34.374336] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64631 terminated unexpected 00:30:14.526 : Not Reported 00:30:14.526 Active Power: Not Reported 00:30:14.526 Non-Operational Permissive Mode: Not Supported 00:30:14.526 00:30:14.526 Health Information 00:30:14.526 ================== 00:30:14.526 Critical Warnings: 00:30:14.526 Available Spare Space: OK 00:30:14.526 Temperature: OK 00:30:14.526 Device Reliability: OK 00:30:14.526 Read Only: No 00:30:14.526 Volatile Memory Backup: OK 00:30:14.526 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.526 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.526 Available Spare: 0% 00:30:14.526 Available Spare Threshold: 0% 00:30:14.526 Life Percentage Used: 0% 00:30:14.526 Data Units Read: 760 00:30:14.526 Data Units Written: 688 00:30:14.526 Host Read Commands: 33673 00:30:14.526 Host Write Commands: 33459 00:30:14.526 Controller Busy Time: 0 minutes 00:30:14.526 Power Cycles: 0 00:30:14.526 Power On Hours: 0 hours 00:30:14.526 Unsafe Shutdowns: 0 00:30:14.526 Unrecoverable Media Errors: 0 00:30:14.526 Lifetime Error Log Entries: 0 00:30:14.526 Warning Temperature Time: 0 minutes 00:30:14.526 Critical Temperature Time: 0 minutes 00:30:14.526 00:30:14.526 Number of Queues 00:30:14.526 ================ 00:30:14.526 Number of I/O Submission Queues: 64 00:30:14.526 Number of I/O Completion Queues: 64 00:30:14.526 00:30:14.526 ZNS Specific Controller Data 00:30:14.526 ============================ 00:30:14.526 Zone Append Size Limit: 0 00:30:14.526 
00:30:14.526 00:30:14.526 Active Namespaces 00:30:14.526 ================= 00:30:14.526 Namespace ID:1 00:30:14.526 Error Recovery Timeout: Unlimited 00:30:14.526 Command Set Identifier: NVM (00h) 00:30:14.526 Deallocate: Supported 00:30:14.526 Deallocated/Unwritten Error: Supported 00:30:14.526 Deallocated Read Value: All 0x00 00:30:14.526 Deallocate in Write Zeroes: Not Supported 00:30:14.526 Deallocated Guard Field: 0xFFFF 00:30:14.526 Flush: Supported 00:30:14.526 Reservation: Not Supported 00:30:14.526 Metadata Transferred as: Separate Metadata Buffer 00:30:14.526 Namespace Sharing Capabilities: Private 00:30:14.526 Size (in LBAs): 1548666 (5GiB) 00:30:14.526 Capacity (in LBAs): 1548666 (5GiB) 00:30:14.526 Utilization (in LBAs): 1548666 (5GiB) 00:30:14.526 Thin Provisioning: Not Supported 00:30:14.526 Per-NS Atomic Units: No 00:30:14.526 Maximum Single Source Range Length: 128 00:30:14.526 Maximum Copy Length: 128 00:30:14.526 Maximum Source Range Count: 128 00:30:14.526 NGUID/EUI64 Never Reused: No 00:30:14.526 Namespace Write Protected: No 00:30:14.526 Number of LBA Formats: 8 00:30:14.526 Current LBA Format: LBA Format #07 00:30:14.526 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.526 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.526 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.526 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.526 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.526 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.526 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.526 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.526 00:30:14.526 NVM Specific Namespace Data 00:30:14.526 =========================== 00:30:14.526 Logical Block Storage Tag Mask: 0 00:30:14.526 Protection Information Capabilities: 00:30:14.526 16b Guard Protection Information Storage Tag Support: No 00:30:14.526 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.527 Storage Tag Check Read Support: No 00:30:14.527 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.527 ===================================================== 00:30:14.527 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:14.527 ===================================================== 00:30:14.527 Controller Capabilities/Features 00:30:14.527 ================================ 00:30:14.527 Vendor ID: 1b36 00:30:14.527 Subsystem Vendor ID: 1af4 00:30:14.527 Serial Number: 12341 00:30:14.527 Model Number: QEMU NVMe Ctrl 00:30:14.527 Firmware Version: 8.0.0 00:30:14.527 Recommended Arb Burst: 6 00:30:14.527 IEEE OUI Identifier: 00 54 52 00:30:14.527 Multi-path I/O 00:30:14.527 May have multiple subsystem ports: No 00:30:14.527 May have multiple controllers: No 
00:30:14.527 Associated with SR-IOV VF: No 00:30:14.527 Max Data Transfer Size: 524288 00:30:14.527 Max Number of Namespaces: 256 00:30:14.527 Max Number of I/O Queues: 64 00:30:14.527 NVMe Specification Version (VS): 1.4 00:30:14.527 NVMe Specification Version (Identify): 1.4 00:30:14.527 Maximum Queue Entries: 2048 00:30:14.527 Contiguous Queues Required: Yes 00:30:14.527 Arbitration Mechanisms Supported 00:30:14.527 Weighted Round Robin: Not Supported 00:30:14.527 Vendor Specific: Not Supported 00:30:14.527 Reset Timeout: 7500 ms 00:30:14.527 Doorbell Stride: 4 bytes 00:30:14.527 NVM Subsystem Reset: Not Supported 00:30:14.527 Command Sets Supported 00:30:14.527 NVM Command Set: Supported 00:30:14.527 Boot Partition: Not Supported 00:30:14.527 Memory Page Size Minimum: 4096 bytes 00:30:14.527 Memory Page Size Maximum: 65536 bytes 00:30:14.527 Persistent Memory Region: Not Supported 00:30:14.527 Optional Asynchronous Events Supported 00:30:14.527 Namespace Attribute Notices: Supported 00:30:14.527 Firmware Activation Notices: Not Supported 00:30:14.527 ANA Change Notices: Not Supported 00:30:14.527 PLE Aggregate Log Change Notices: Not Supported 00:30:14.527 LBA Status Info Alert Notices: Not Supported 00:30:14.527 EGE Aggregate Log Change Notices: Not Supported 00:30:14.527 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.527 Zone Descriptor Change Notices: Not Supported 00:30:14.527 Discovery Log Change Notices: Not Supported 00:30:14.527 Controller Attributes 00:30:14.527 128-bit Host Identifier: Not Supported 00:30:14.527 Non-Operational Permissive Mode: Not Supported 00:30:14.527 NVM Sets: Not Supported 00:30:14.527 Read Recovery Levels: Not Supported 00:30:14.527 Endurance Groups: Not Supported 00:30:14.527 Predictable Latency Mode: Not Supported 00:30:14.527 Traffic Based Keep ALive: Not Supported 00:30:14.527 Namespace Granularity: Not Supported 00:30:14.527 SQ Associations: Not Supported 00:30:14.527 UUID List: Not Supported 00:30:14.527 Multi-Domain Subsystem: Not Supported 00:30:14.527 Fixed Capacity Management: Not Supported 00:30:14.527 Variable Capacity Management: Not Supported 00:30:14.527 Delete Endurance Group: Not Supported 00:30:14.527 Delete NVM Set: Not Supported 00:30:14.527 Extended LBA Formats Supported: Supported 00:30:14.527 Flexible Data Placement Supported: Not Supported 00:30:14.527 00:30:14.527 Controller Memory Buffer Support 00:30:14.527 ================================ 00:30:14.527 Supported: No 00:30:14.527 00:30:14.527 Persistent Memory Region Support 00:30:14.527 ================================ 00:30:14.527 Supported: No 00:30:14.527 00:30:14.527 Admin Command Set Attributes 00:30:14.527 ============================ 00:30:14.527 Security Send/Receive: Not Supported 00:30:14.527 Format NVM: Supported 00:30:14.527 Firmware Activate/Download: Not Supported 00:30:14.527 Namespace Management: Supported 00:30:14.527 Device Self-Test: Not Supported 00:30:14.527 Directives: Supported 00:30:14.527 NVMe-MI: Not Supported 00:30:14.527 Virtualization Management: Not Supported 00:30:14.527 Doorbell Buffer Config: Supported 00:30:14.527 Get LBA Status Capability: Not Supported 00:30:14.527 Command & Feature Lockdown Capability: Not Supported 00:30:14.527 Abort Command Limit: 4 00:30:14.527 Async Event Request Limit: 4 00:30:14.527 Number of Firmware Slots: N/A 00:30:14.527 Firmware Slot 1 Read-Only: N/A 00:30:14.527 Firmware Activation Without Reset: N/A 00:30:14.527 Multiple Update Detection Support: N/A 00:30:14.527 Firmware Update Granularity: No 
Information Provided 00:30:14.527 Per-Namespace SMART Log: Yes 00:30:14.527 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.527 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:14.527 Command Effects Log Page: Supported 00:30:14.527 Get Log Page Extended Data: Supported 00:30:14.527 Telemetry Log Pages: Not Supported 00:30:14.527 Persistent Event Log Pages: Not Supported 00:30:14.527 Supported Log Pages Log Page: May Support 00:30:14.527 Commands Supported & Effects Log Page: Not Supported 00:30:14.527 Feature Identifiers & Effects Log Page:May Support 00:30:14.527 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.527 Data Area 4 for Telemetry Log: Not Supported 00:30:14.527 Error Log Page Entries Supported: 1 00:30:14.527 Keep Alive: Not Supported 00:30:14.527 00:30:14.527 NVM Command Set Attributes 00:30:14.527 ========================== 00:30:14.527 Submission Queue Entry Size 00:30:14.527 Max: 64 00:30:14.527 Min: 64 00:30:14.527 Completion Queue Entry Size 00:30:14.527 Max: 16 00:30:14.527 Min: 16 00:30:14.527 Number of Namespaces: 256 00:30:14.527 Compare Command: Supported 00:30:14.527 Write Uncorrectable Command: Not Supported 00:30:14.527 Dataset Management Command: Supported 00:30:14.527 Write Zeroes Command: Supported 00:30:14.527 Set Features Save Field: Supported 00:30:14.527 Reservations: Not Supported 00:30:14.527 Timestamp: Supported 00:30:14.527 Copy: Supported 00:30:14.527 Volatile Write Cache: Present 00:30:14.527 Atomic Write Unit (Normal): 1 00:30:14.527 Atomic Write Unit (PFail): 1 00:30:14.527 Atomic Compare & Write Unit: 1 00:30:14.527 Fused Compare & Write: Not Supported 00:30:14.527 Scatter-Gather List 00:30:14.527 SGL Command Set: Supported 00:30:14.527 SGL Keyed: Not Supported 00:30:14.527 SGL Bit Bucket Descriptor: Not Supported 00:30:14.527 SGL Metadata Pointer: Not Supported 00:30:14.527 Oversized SGL: Not Supported 00:30:14.527 SGL Metadata Address: Not Supported 00:30:14.527 SGL Offset: Not Supported 00:30:14.527 Transport SGL Data Block: Not Supported 00:30:14.527 Replay Protected Memory Block: Not Supported 00:30:14.527 00:30:14.527 Firmware Slot Information 00:30:14.527 ========================= 00:30:14.527 Active slot: 1 00:30:14.527 Slot 1 Firmware Revision: 1.0 00:30:14.527 00:30:14.527 00:30:14.527 Commands Supported and Effects 00:30:14.527 ============================== 00:30:14.527 Admin Commands 00:30:14.527 -------------- 00:30:14.527 Delete I/O Submission Queue (00h): Supported 00:30:14.527 Create I/O Submission Queue (01h): Supported 00:30:14.527 Get Log Page (02h): Supported 00:30:14.527 Delete I/O Completion Queue (04h): Supported 00:30:14.527 Create I/O Completion Queue (05h): Supported 00:30:14.527 Identify (06h): Supported 00:30:14.527 Abort (08h): Supported 00:30:14.527 Set Features (09h): Supported 00:30:14.527 Get Features (0Ah): Supported 00:30:14.527 Asynchronous Event Request (0Ch): Supported 00:30:14.527 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.527 Directive Send (19h): Supported 00:30:14.527 Directive Receive (1Ah): Supported 00:30:14.527 Virtualization Management (1Ch): Supported 00:30:14.527 Doorbell Buffer Config (7Ch): Supported 00:30:14.527 Format NVM (80h): Supported LBA-Change 00:30:14.527 I/O Commands 00:30:14.527 ------------ 00:30:14.527 Flush (00h): Supported LBA-Change 00:30:14.527 Write (01h): Supported LBA-Change 00:30:14.527 Read (02h): Supported 00:30:14.528 Compare (05h): Supported 00:30:14.528 Write Zeroes (08h): Supported LBA-Change 00:30:14.528 Dataset Management 
(09h): Supported LBA-Change 00:30:14.528 Unknown (0Ch): Supported 00:30:14.528 Unknown (12h): Supported 00:30:14.528 Copy (19h): Supported LBA-Change 00:30:14.528 Unknown (1Dh): Supported LBA-Change 00:30:14.528 00:30:14.528 Error Log 00:30:14.528 ========= 00:30:14.528 00:30:14.528 Arbitration 00:30:14.528 =========== 00:30:14.528 Arbitration Burst: no limit 00:30:14.528 00:30:14.528 Power Management 00:30:14.528 ================ 00:30:14.528 Number of Power States: 1 00:30:14.528 Current Power State: Power State #0 00:30:14.528 Power State #0: 00:30:14.528 Max Power: 25.00 W 00:30:14.528 Non-Operational State: Operational 00:30:14.528 Entry Latency: 16 microseconds 00:30:14.528 Exit Latency: 4 microseconds 00:30:14.528 Relative Read Throughput: 0 00:30:14.528 Relative Read Latency: 0 00:30:14.528 Relative Write Throughput: 0 00:30:14.528 Relative Write Latency: 0 00:30:14.528 Idle Power: Not Reported 00:30:14.528 Active Power: Not Reported 00:30:14.528 Non-Operational Permissive Mode: Not Supported 00:30:14.528 00:30:14.528 Health Information 00:30:14.528 ================== 00:30:14.528 Critical Warnings: 00:30:14.528 Available Spare Space: OK 00:30:14.528 Temperature: [2024-11-20 05:41:34.375012] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64631 terminated unexpected 00:30:14.528 OK 00:30:14.528 Device Reliability: OK 00:30:14.528 Read Only: No 00:30:14.528 Volatile Memory Backup: OK 00:30:14.528 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.528 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.528 Available Spare: 0% 00:30:14.528 Available Spare Threshold: 0% 00:30:14.528 Life Percentage Used: 0% 00:30:14.528 Data Units Read: 1187 00:30:14.528 Data Units Written: 1048 00:30:14.528 Host Read Commands: 50359 00:30:14.528 Host Write Commands: 49056 00:30:14.528 Controller Busy Time: 0 minutes 00:30:14.528 Power Cycles: 0 00:30:14.528 Power On Hours: 0 hours 00:30:14.528 Unsafe Shutdowns: 0 00:30:14.528 Unrecoverable Media Errors: 0 00:30:14.528 Lifetime Error Log Entries: 0 00:30:14.528 Warning Temperature Time: 0 minutes 00:30:14.528 Critical Temperature Time: 0 minutes 00:30:14.528 00:30:14.528 Number of Queues 00:30:14.528 ================ 00:30:14.528 Number of I/O Submission Queues: 64 00:30:14.528 Number of I/O Completion Queues: 64 00:30:14.528 00:30:14.528 ZNS Specific Controller Data 00:30:14.528 ============================ 00:30:14.528 Zone Append Size Limit: 0 00:30:14.528 00:30:14.528 00:30:14.528 Active Namespaces 00:30:14.528 ================= 00:30:14.528 Namespace ID:1 00:30:14.528 Error Recovery Timeout: Unlimited 00:30:14.528 Command Set Identifier: NVM (00h) 00:30:14.528 Deallocate: Supported 00:30:14.528 Deallocated/Unwritten Error: Supported 00:30:14.528 Deallocated Read Value: All 0x00 00:30:14.528 Deallocate in Write Zeroes: Not Supported 00:30:14.528 Deallocated Guard Field: 0xFFFF 00:30:14.528 Flush: Supported 00:30:14.528 Reservation: Not Supported 00:30:14.528 Namespace Sharing Capabilities: Private 00:30:14.528 Size (in LBAs): 1310720 (5GiB) 00:30:14.528 Capacity (in LBAs): 1310720 (5GiB) 00:30:14.528 Utilization (in LBAs): 1310720 (5GiB) 00:30:14.528 Thin Provisioning: Not Supported 00:30:14.528 Per-NS Atomic Units: No 00:30:14.528 Maximum Single Source Range Length: 128 00:30:14.528 Maximum Copy Length: 128 00:30:14.528 Maximum Source Range Count: 128 00:30:14.528 NGUID/EUI64 Never Reused: No 00:30:14.528 Namespace Write Protected: No 00:30:14.528 Number of LBA Formats: 8 00:30:14.528 Current LBA 
Format: LBA Format #04 00:30:14.528 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.528 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.528 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.528 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.528 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.528 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.528 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.528 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.528 00:30:14.528 NVM Specific Namespace Data 00:30:14.528 =========================== 00:30:14.528 Logical Block Storage Tag Mask: 0 00:30:14.528 Protection Information Capabilities: 00:30:14.528 16b Guard Protection Information Storage Tag Support: No 00:30:14.528 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.528 Storage Tag Check Read Support: No 00:30:14.528 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.528 ===================================================== 00:30:14.528 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:14.528 ===================================================== 00:30:14.528 Controller Capabilities/Features 00:30:14.528 ================================ 00:30:14.528 Vendor ID: 1b36 00:30:14.528 Subsystem Vendor ID: 1af4 00:30:14.528 Serial Number: 12343 00:30:14.528 Model Number: QEMU NVMe Ctrl 00:30:14.528 Firmware Version: 8.0.0 00:30:14.528 Recommended Arb Burst: 6 00:30:14.528 IEEE OUI Identifier: 00 54 52 00:30:14.528 Multi-path I/O 00:30:14.528 May have multiple subsystem ports: No 00:30:14.528 May have multiple controllers: Yes 00:30:14.528 Associated with SR-IOV VF: No 00:30:14.528 Max Data Transfer Size: 524288 00:30:14.528 Max Number of Namespaces: 256 00:30:14.528 Max Number of I/O Queues: 64 00:30:14.528 NVMe Specification Version (VS): 1.4 00:30:14.528 NVMe Specification Version (Identify): 1.4 00:30:14.528 Maximum Queue Entries: 2048 00:30:14.528 Contiguous Queues Required: Yes 00:30:14.528 Arbitration Mechanisms Supported 00:30:14.528 Weighted Round Robin: Not Supported 00:30:14.528 Vendor Specific: Not Supported 00:30:14.528 Reset Timeout: 7500 ms 00:30:14.528 Doorbell Stride: 4 bytes 00:30:14.528 NVM Subsystem Reset: Not Supported 00:30:14.528 Command Sets Supported 00:30:14.528 NVM Command Set: Supported 00:30:14.528 Boot Partition: Not Supported 00:30:14.528 Memory Page Size Minimum: 4096 bytes 00:30:14.528 Memory Page Size Maximum: 65536 bytes 00:30:14.528 Persistent Memory Region: Not Supported 00:30:14.528 Optional Asynchronous Events Supported 00:30:14.528 Namespace Attribute Notices: Supported 00:30:14.528 Firmware Activation Notices: Not Supported 00:30:14.528 ANA Change Notices: Not Supported 00:30:14.528 PLE Aggregate 
Log Change Notices: Not Supported 00:30:14.528 LBA Status Info Alert Notices: Not Supported 00:30:14.528 EGE Aggregate Log Change Notices: Not Supported 00:30:14.528 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.528 Zone Descriptor Change Notices: Not Supported 00:30:14.528 Discovery Log Change Notices: Not Supported 00:30:14.528 Controller Attributes 00:30:14.528 128-bit Host Identifier: Not Supported 00:30:14.528 Non-Operational Permissive Mode: Not Supported 00:30:14.528 NVM Sets: Not Supported 00:30:14.528 Read Recovery Levels: Not Supported 00:30:14.528 Endurance Groups: Supported 00:30:14.528 Predictable Latency Mode: Not Supported 00:30:14.528 Traffic Based Keep ALive: Not Supported 00:30:14.528 Namespace Granularity: Not Supported 00:30:14.528 SQ Associations: Not Supported 00:30:14.528 UUID List: Not Supported 00:30:14.528 Multi-Domain Subsystem: Not Supported 00:30:14.528 Fixed Capacity Management: Not Supported 00:30:14.528 Variable Capacity Management: Not Supported 00:30:14.528 Delete Endurance Group: Not Supported 00:30:14.528 Delete NVM Set: Not Supported 00:30:14.528 Extended LBA Formats Supported: Supported 00:30:14.528 Flexible Data Placement Supported: Supported 00:30:14.528 00:30:14.528 Controller Memory Buffer Support 00:30:14.528 ================================ 00:30:14.528 Supported: No 00:30:14.528 00:30:14.528 Persistent Memory Region Support 00:30:14.528 ================================ 00:30:14.528 Supported: No 00:30:14.528 00:30:14.528 Admin Command Set Attributes 00:30:14.529 ============================ 00:30:14.529 Security Send/Receive: Not Supported 00:30:14.529 Format NVM: Supported 00:30:14.529 Firmware Activate/Download: Not Supported 00:30:14.529 Namespace Management: Supported 00:30:14.529 Device Self-Test: Not Supported 00:30:14.529 Directives: Supported 00:30:14.529 NVMe-MI: Not Supported 00:30:14.529 Virtualization Management: Not Supported 00:30:14.529 Doorbell Buffer Config: Supported 00:30:14.529 Get LBA Status Capability: Not Supported 00:30:14.529 Command & Feature Lockdown Capability: Not Supported 00:30:14.529 Abort Command Limit: 4 00:30:14.529 Async Event Request Limit: 4 00:30:14.529 Number of Firmware Slots: N/A 00:30:14.529 Firmware Slot 1 Read-Only: N/A 00:30:14.529 Firmware Activation Without Reset: N/A 00:30:14.529 Multiple Update Detection Support: N/A 00:30:14.529 Firmware Update Granularity: No Information Provided 00:30:14.529 Per-Namespace SMART Log: Yes 00:30:14.529 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.529 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:14.529 Command Effects Log Page: Supported 00:30:14.529 Get Log Page Extended Data: Supported 00:30:14.529 Telemetry Log Pages: Not Supported 00:30:14.529 Persistent Event Log Pages: Not Supported 00:30:14.529 Supported Log Pages Log Page: May Support 00:30:14.529 Commands Supported & Effects Log Page: Not Supported 00:30:14.529 Feature Identifiers & Effects Log Page:May Support 00:30:14.529 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.529 Data Area 4 for Telemetry Log: Not Supported 00:30:14.529 Error Log Page Entries Supported: 1 00:30:14.529 Keep Alive: Not Supported 00:30:14.529 00:30:14.529 NVM Command Set Attributes 00:30:14.529 ========================== 00:30:14.529 Submission Queue Entry Size 00:30:14.529 Max: 64 00:30:14.529 Min: 64 00:30:14.529 Completion Queue Entry Size 00:30:14.529 Max: 16 00:30:14.529 Min: 16 00:30:14.529 Number of Namespaces: 256 00:30:14.529 Compare Command: Supported 00:30:14.529 Write 
Uncorrectable Command: Not Supported 00:30:14.529 Dataset Management Command: Supported 00:30:14.529 Write Zeroes Command: Supported 00:30:14.529 Set Features Save Field: Supported 00:30:14.529 Reservations: Not Supported 00:30:14.529 Timestamp: Supported 00:30:14.529 Copy: Supported 00:30:14.529 Volatile Write Cache: Present 00:30:14.529 Atomic Write Unit (Normal): 1 00:30:14.529 Atomic Write Unit (PFail): 1 00:30:14.529 Atomic Compare & Write Unit: 1 00:30:14.529 Fused Compare & Write: Not Supported 00:30:14.529 Scatter-Gather List 00:30:14.529 SGL Command Set: Supported 00:30:14.529 SGL Keyed: Not Supported 00:30:14.529 SGL Bit Bucket Descriptor: Not Supported 00:30:14.529 SGL Metadata Pointer: Not Supported 00:30:14.529 Oversized SGL: Not Supported 00:30:14.529 SGL Metadata Address: Not Supported 00:30:14.529 SGL Offset: Not Supported 00:30:14.529 Transport SGL Data Block: Not Supported 00:30:14.529 Replay Protected Memory Block: Not Supported 00:30:14.529 00:30:14.529 Firmware Slot Information 00:30:14.529 ========================= 00:30:14.529 Active slot: 1 00:30:14.529 Slot 1 Firmware Revision: 1.0 00:30:14.529 00:30:14.529 00:30:14.529 Commands Supported and Effects 00:30:14.529 ============================== 00:30:14.529 Admin Commands 00:30:14.529 -------------- 00:30:14.529 Delete I/O Submission Queue (00h): Supported 00:30:14.529 Create I/O Submission Queue (01h): Supported 00:30:14.529 Get Log Page (02h): Supported 00:30:14.529 Delete I/O Completion Queue (04h): Supported 00:30:14.529 Create I/O Completion Queue (05h): Supported 00:30:14.529 Identify (06h): Supported 00:30:14.529 Abort (08h): Supported 00:30:14.529 Set Features (09h): Supported 00:30:14.529 Get Features (0Ah): Supported 00:30:14.529 Asynchronous Event Request (0Ch): Supported 00:30:14.529 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.529 Directive Send (19h): Supported 00:30:14.529 Directive Receive (1Ah): Supported 00:30:14.529 Virtualization Management (1Ch): Supported 00:30:14.529 Doorbell Buffer Config (7Ch): Supported 00:30:14.529 Format NVM (80h): Supported LBA-Change 00:30:14.529 I/O Commands 00:30:14.529 ------------ 00:30:14.529 Flush (00h): Supported LBA-Change 00:30:14.529 Write (01h): Supported LBA-Change 00:30:14.529 Read (02h): Supported 00:30:14.529 Compare (05h): Supported 00:30:14.529 Write Zeroes (08h): Supported LBA-Change 00:30:14.529 Dataset Management (09h): Supported LBA-Change 00:30:14.529 Unknown (0Ch): Supported 00:30:14.529 Unknown (12h): Supported 00:30:14.529 Copy (19h): Supported LBA-Change 00:30:14.529 Unknown (1Dh): Supported LBA-Change 00:30:14.529 00:30:14.529 Error Log 00:30:14.529 ========= 00:30:14.529 00:30:14.529 Arbitration 00:30:14.529 =========== 00:30:14.529 Arbitration Burst: no limit 00:30:14.529 00:30:14.529 Power Management 00:30:14.529 ================ 00:30:14.529 Number of Power States: 1 00:30:14.529 Current Power State: Power State #0 00:30:14.529 Power State #0: 00:30:14.529 Max Power: 25.00 W 00:30:14.529 Non-Operational State: Operational 00:30:14.529 Entry Latency: 16 microseconds 00:30:14.529 Exit Latency: 4 microseconds 00:30:14.529 Relative Read Throughput: 0 00:30:14.529 Relative Read Latency: 0 00:30:14.529 Relative Write Throughput: 0 00:30:14.529 Relative Write Latency: 0 00:30:14.529 Idle Power: Not Reported 00:30:14.529 Active Power: Not Reported 00:30:14.529 Non-Operational Permissive Mode: Not Supported 00:30:14.529 00:30:14.529 Health Information 00:30:14.529 ================== 00:30:14.529 Critical Warnings: 00:30:14.529 
Available Spare Space: OK 00:30:14.529 Temperature: OK 00:30:14.529 Device Reliability: OK 00:30:14.529 Read Only: No 00:30:14.529 Volatile Memory Backup: OK 00:30:14.529 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.529 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.529 Available Spare: 0% 00:30:14.529 Available Spare Threshold: 0% 00:30:14.529 Life Percentage Used: 0% 00:30:14.529 Data Units Read: 845 00:30:14.529 Data Units Written: 774 00:30:14.529 Host Read Commands: 34760 00:30:14.529 Host Write Commands: 34183 00:30:14.529 Controller Busy Time: 0 minutes 00:30:14.529 Power Cycles: 0 00:30:14.529 Power On Hours: 0 hours 00:30:14.529 Unsafe Shutdowns: 0 00:30:14.529 Unrecoverable Media Errors: 0 00:30:14.529 Lifetime Error Log Entries: 0 00:30:14.529 Warning Temperature Time: 0 minutes 00:30:14.529 Critical Temperature Time: 0 minutes 00:30:14.529 00:30:14.529 Number of Queues 00:30:14.529 ================ 00:30:14.529 Number of I/O Submission Queues: 64 00:30:14.529 Number of I/O Completion Queues: 64 00:30:14.529 00:30:14.529 ZNS Specific Controller Data 00:30:14.529 ============================ 00:30:14.529 Zone Append Size Limit: 0 00:30:14.529 00:30:14.529 00:30:14.529 Active Namespaces 00:30:14.529 ================= 00:30:14.529 Namespace ID:1 00:30:14.529 Error Recovery Timeout: Unlimited 00:30:14.529 Command Set Identifier: NVM (00h) 00:30:14.529 Deallocate: Supported 00:30:14.529 Deallocated/Unwritten Error: Supported 00:30:14.529 Deallocated Read Value: All 0x00 00:30:14.529 Deallocate in Write Zeroes: Not Supported 00:30:14.529 Deallocated Guard Field: 0xFFFF 00:30:14.529 Flush: Supported 00:30:14.529 Reservation: Not Supported 00:30:14.529 Namespace Sharing Capabilities: Multiple Controllers 00:30:14.529 Size (in LBAs): 262144 (1GiB) 00:30:14.529 Capacity (in LBAs): 262144 (1GiB) 00:30:14.529 Utilization (in LBAs): 262144 (1GiB) 00:30:14.529 Thin Provisioning: Not Supported 00:30:14.529 Per-NS Atomic Units: No 00:30:14.529 Maximum Single Source Range Length: 128 00:30:14.529 Maximum Copy Length: 128 00:30:14.529 Maximum Source Range Count: 128 00:30:14.529 NGUID/EUI64 Never Reused: No 00:30:14.529 Namespace Write Protected: No 00:30:14.529 Endurance group ID: 1 00:30:14.529 Number of LBA Formats: 8 00:30:14.529 Current LBA Format: LBA Format #04 00:30:14.529 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.529 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.529 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.529 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.529 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.529 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.530 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.530 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.530 00:30:14.530 Get Feature FDP: 00:30:14.530 ================ 00:30:14.530 Enabled: Yes 00:30:14.530 FDP configuration index: 0 00:30:14.530 00:30:14.530 FDP configurations log page 00:30:14.530 =========================== 00:30:14.530 Number of FDP configurations: 1 00:30:14.530 Version: 0 00:30:14.530 Size: 112 00:30:14.530 FDP Configuration Descriptor: 0 00:30:14.530 Descriptor Size: 96 00:30:14.530 Reclaim Group Identifier format: 2 00:30:14.530 FDP Volatile Write Cache: Not Present 00:30:14.530 FDP Configuration: Valid 00:30:14.530 Vendor Specific Size: 0 00:30:14.530 Number of Reclaim Groups: 2 00:30:14.530 Number of Reclaim Unit Handles: 8 00:30:14.530 Max Placement Identifiers: 128 00:30:14.530 Number of
Namespaces Supported: 256 00:30:14.530 Reclaim unit Nominal Size: 6000000 bytes 00:30:14.530 Estimated Reclaim Unit Time Limit: Not Reported 00:30:14.530 RUH Desc #000: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #001: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #002: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #003: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #004: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #005: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #006: RUH Type: Initially Isolated 00:30:14.530 RUH Desc #007: RUH Type: Initially Isolated 00:30:14.530 00:30:14.530 FDP reclaim unit handle usage log page 00:30:14.530 ====================================== 00:30:14.530 Number of Reclaim Unit Handles: 8 00:30:14.530 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:14.530 RUH Usage Desc #001: RUH Attributes: Unused 00:30:14.530 RUH Usage Desc #002: RUH Attributes: Unused 00:30:14.530 RUH Usage Desc #003: RUH Attributes: Unused 00:30:14.530 RUH Usage Desc #004: RUH Attributes: Unused 00:30:14.530 RUH Usage Desc #005: RUH Attributes: Unused 00:30:14.530 RUH Usage Desc #006: RUH Attributes: Unused 00:30:14.530 RUH Usage Desc #007: RUH Attributes: Unused 00:30:14.530 00:30:14.530 FDP statistics log page 00:30:14.530 ======================= 00:30:14.530 Host bytes with metadata written: 486842368 00:30:14.530 Med[2024-11-20 05:41:34.376223] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64631 terminated unexpected 00:30:14.530 ia bytes with metadata written: 486895616 00:30:14.530 Media bytes erased: 0 00:30:14.530 00:30:14.530 FDP events log page 00:30:14.530 =================== 00:30:14.530 Number of FDP events: 0 00:30:14.530 00:30:14.530 NVM Specific Namespace Data 00:30:14.530 =========================== 00:30:14.530 Logical Block Storage Tag Mask: 0 00:30:14.530 Protection Information Capabilities: 00:30:14.530 16b Guard Protection Information Storage Tag Support: No 00:30:14.530 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.530 Storage Tag Check Read Support: No 00:30:14.530 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.530 ===================================================== 00:30:14.530 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:14.530 ===================================================== 00:30:14.530 Controller Capabilities/Features 00:30:14.530 ================================ 00:30:14.530 Vendor ID: 1b36 00:30:14.530 Subsystem Vendor ID: 1af4 00:30:14.530 Serial Number: 12342 00:30:14.530 Model Number: QEMU NVMe Ctrl 00:30:14.530 Firmware Version: 8.0.0 00:30:14.530 Recommended Arb Burst: 6 00:30:14.530 IEEE OUI Identifier: 00 54 52 00:30:14.530 Multi-path I/O
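As a quick cross-check of the namespace sizes reported in the dumps above: size in LBAs multiplied by the data size of the current LBA format gives the raw capacity. Shell arithmetic over the figures from this run:

    echo $(( 1310720 * 4096 ))   # 12341, LBA format #04: 5368709120 bytes = exactly 5 GiB
    echo $((  262144 * 4096 ))   # 12343 FDP namespace:   1073741824 bytes = exactly 1 GiB
    echo $(( 1548666 * 4096 ))   # 12340, format #07 (4096-byte data + 64-byte metadata): 6343335936 bytes, about 5.9 GiB, printed floor-truncated as (5GiB)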
00:30:14.530 May have multiple subsystem ports: No 00:30:14.530 May have multiple controllers: No 00:30:14.530 Associated with SR-IOV VF: No 00:30:14.530 Max Data Transfer Size: 524288 00:30:14.530 Max Number of Namespaces: 256 00:30:14.530 Max Number of I/O Queues: 64 00:30:14.530 NVMe Specification Version (VS): 1.4 00:30:14.530 NVMe Specification Version (Identify): 1.4 00:30:14.530 Maximum Queue Entries: 2048 00:30:14.530 Contiguous Queues Required: Yes 00:30:14.530 Arbitration Mechanisms Supported 00:30:14.530 Weighted Round Robin: Not Supported 00:30:14.530 Vendor Specific: Not Supported 00:30:14.530 Reset Timeout: 7500 ms 00:30:14.530 Doorbell Stride: 4 bytes 00:30:14.530 NVM Subsystem Reset: Not Supported 00:30:14.530 Command Sets Supported 00:30:14.530 NVM Command Set: Supported 00:30:14.530 Boot Partition: Not Supported 00:30:14.530 Memory Page Size Minimum: 4096 bytes 00:30:14.530 Memory Page Size Maximum: 65536 bytes 00:30:14.530 Persistent Memory Region: Not Supported 00:30:14.530 Optional Asynchronous Events Supported 00:30:14.530 Namespace Attribute Notices: Supported 00:30:14.530 Firmware Activation Notices: Not Supported 00:30:14.530 ANA Change Notices: Not Supported 00:30:14.530 PLE Aggregate Log Change Notices: Not Supported 00:30:14.530 LBA Status Info Alert Notices: Not Supported 00:30:14.530 EGE Aggregate Log Change Notices: Not Supported 00:30:14.530 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.530 Zone Descriptor Change Notices: Not Supported 00:30:14.530 Discovery Log Change Notices: Not Supported 00:30:14.530 Controller Attributes 00:30:14.530 128-bit Host Identifier: Not Supported 00:30:14.530 Non-Operational Permissive Mode: Not Supported 00:30:14.530 NVM Sets: Not Supported 00:30:14.530 Read Recovery Levels: Not Supported 00:30:14.530 Endurance Groups: Not Supported 00:30:14.530 Predictable Latency Mode: Not Supported 00:30:14.530 Traffic Based Keep ALive: Not Supported 00:30:14.530 Namespace Granularity: Not Supported 00:30:14.530 SQ Associations: Not Supported 00:30:14.530 UUID List: Not Supported 00:30:14.530 Multi-Domain Subsystem: Not Supported 00:30:14.530 Fixed Capacity Management: Not Supported 00:30:14.530 Variable Capacity Management: Not Supported 00:30:14.530 Delete Endurance Group: Not Supported 00:30:14.530 Delete NVM Set: Not Supported 00:30:14.530 Extended LBA Formats Supported: Supported 00:30:14.530 Flexible Data Placement Supported: Not Supported 00:30:14.530 00:30:14.530 Controller Memory Buffer Support 00:30:14.530 ================================ 00:30:14.530 Supported: No 00:30:14.530 00:30:14.530 Persistent Memory Region Support 00:30:14.530 ================================ 00:30:14.530 Supported: No 00:30:14.530 00:30:14.530 Admin Command Set Attributes 00:30:14.530 ============================ 00:30:14.530 Security Send/Receive: Not Supported 00:30:14.530 Format NVM: Supported 00:30:14.530 Firmware Activate/Download: Not Supported 00:30:14.530 Namespace Management: Supported 00:30:14.530 Device Self-Test: Not Supported 00:30:14.530 Directives: Supported 00:30:14.531 NVMe-MI: Not Supported 00:30:14.531 Virtualization Management: Not Supported 00:30:14.531 Doorbell Buffer Config: Supported 00:30:14.531 Get LBA Status Capability: Not Supported 00:30:14.531 Command & Feature Lockdown Capability: Not Supported 00:30:14.531 Abort Command Limit: 4 00:30:14.531 Async Event Request Limit: 4 00:30:14.531 Number of Firmware Slots: N/A 00:30:14.531 Firmware Slot 1 Read-Only: N/A 00:30:14.531 Firmware Activation Without Reset: N/A 
00:30:14.531 Multiple Update Detection Support: N/A 00:30:14.531 Firmware Update Granularity: No Information Provided 00:30:14.531 Per-Namespace SMART Log: Yes 00:30:14.531 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.531 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:14.531 Command Effects Log Page: Supported 00:30:14.531 Get Log Page Extended Data: Supported 00:30:14.531 Telemetry Log Pages: Not Supported 00:30:14.531 Persistent Event Log Pages: Not Supported 00:30:14.531 Supported Log Pages Log Page: May Support 00:30:14.531 Commands Supported & Effects Log Page: Not Supported 00:30:14.531 Feature Identifiers & Effects Log Page:May Support 00:30:14.531 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.531 Data Area 4 for Telemetry Log: Not Supported 00:30:14.531 Error Log Page Entries Supported: 1 00:30:14.531 Keep Alive: Not Supported 00:30:14.531 00:30:14.531 NVM Command Set Attributes 00:30:14.531 ========================== 00:30:14.531 Submission Queue Entry Size 00:30:14.531 Max: 64 00:30:14.531 Min: 64 00:30:14.531 Completion Queue Entry Size 00:30:14.531 Max: 16 00:30:14.531 Min: 16 00:30:14.531 Number of Namespaces: 256 00:30:14.531 Compare Command: Supported 00:30:14.531 Write Uncorrectable Command: Not Supported 00:30:14.531 Dataset Management Command: Supported 00:30:14.531 Write Zeroes Command: Supported 00:30:14.531 Set Features Save Field: Supported 00:30:14.531 Reservations: Not Supported 00:30:14.531 Timestamp: Supported 00:30:14.531 Copy: Supported 00:30:14.531 Volatile Write Cache: Present 00:30:14.531 Atomic Write Unit (Normal): 1 00:30:14.531 Atomic Write Unit (PFail): 1 00:30:14.531 Atomic Compare & Write Unit: 1 00:30:14.531 Fused Compare & Write: Not Supported 00:30:14.531 Scatter-Gather List 00:30:14.531 SGL Command Set: Supported 00:30:14.531 SGL Keyed: Not Supported 00:30:14.531 SGL Bit Bucket Descriptor: Not Supported 00:30:14.531 SGL Metadata Pointer: Not Supported 00:30:14.531 Oversized SGL: Not Supported 00:30:14.531 SGL Metadata Address: Not Supported 00:30:14.531 SGL Offset: Not Supported 00:30:14.531 Transport SGL Data Block: Not Supported 00:30:14.531 Replay Protected Memory Block: Not Supported 00:30:14.531 00:30:14.531 Firmware Slot Information 00:30:14.531 ========================= 00:30:14.531 Active slot: 1 00:30:14.531 Slot 1 Firmware Revision: 1.0 00:30:14.531 00:30:14.531 00:30:14.531 Commands Supported and Effects 00:30:14.531 ============================== 00:30:14.531 Admin Commands 00:30:14.531 -------------- 00:30:14.531 Delete I/O Submission Queue (00h): Supported 00:30:14.531 Create I/O Submission Queue (01h): Supported 00:30:14.531 Get Log Page (02h): Supported 00:30:14.531 Delete I/O Completion Queue (04h): Supported 00:30:14.531 Create I/O Completion Queue (05h): Supported 00:30:14.531 Identify (06h): Supported 00:30:14.531 Abort (08h): Supported 00:30:14.531 Set Features (09h): Supported 00:30:14.531 Get Features (0Ah): Supported 00:30:14.531 Asynchronous Event Request (0Ch): Supported 00:30:14.531 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.531 Directive Send (19h): Supported 00:30:14.531 Directive Receive (1Ah): Supported 00:30:14.531 Virtualization Management (1Ch): Supported 00:30:14.531 Doorbell Buffer Config (7Ch): Supported 00:30:14.531 Format NVM (80h): Supported LBA-Change 00:30:14.531 I/O Commands 00:30:14.531 ------------ 00:30:14.531 Flush (00h): Supported LBA-Change 00:30:14.531 Write (01h): Supported LBA-Change 00:30:14.531 Read (02h): Supported 00:30:14.531 Compare (05h): 
Supported 00:30:14.531 Write Zeroes (08h): Supported LBA-Change 00:30:14.531 Dataset Management (09h): Supported LBA-Change 00:30:14.531 Unknown (0Ch): Supported 00:30:14.531 Unknown (12h): Supported 00:30:14.531 Copy (19h): Supported LBA-Change 00:30:14.531 Unknown (1Dh): Supported LBA-Change 00:30:14.531 00:30:14.531 Error Log 00:30:14.531 ========= 00:30:14.531 00:30:14.531 Arbitration 00:30:14.531 =========== 00:30:14.531 Arbitration Burst: no limit 00:30:14.531 00:30:14.531 Power Management 00:30:14.531 ================ 00:30:14.531 Number of Power States: 1 00:30:14.531 Current Power State: Power State #0 00:30:14.531 Power State #0: 00:30:14.531 Max Power: 25.00 W 00:30:14.531 Non-Operational State: Operational 00:30:14.531 Entry Latency: 16 microseconds 00:30:14.531 Exit Latency: 4 microseconds 00:30:14.531 Relative Read Throughput: 0 00:30:14.531 Relative Read Latency: 0 00:30:14.531 Relative Write Throughput: 0 00:30:14.531 Relative Write Latency: 0 00:30:14.531 Idle Power: Not Reported 00:30:14.531 Active Power: Not Reported 00:30:14.531 Non-Operational Permissive Mode: Not Supported 00:30:14.531 00:30:14.531 Health Information 00:30:14.531 ================== 00:30:14.531 Critical Warnings: 00:30:14.531 Available Spare Space: OK 00:30:14.531 Temperature: OK 00:30:14.531 Device Reliability: OK 00:30:14.531 Read Only: No 00:30:14.531 Volatile Memory Backup: OK 00:30:14.531 Current Temperature: 323 Kelvin (50 Celsius) 00:30:14.531 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:14.531 Available Spare: 0% 00:30:14.531 Available Spare Threshold: 0% 00:30:14.531 Life Percentage Used: 0% 00:30:14.531 Data Units Read: 2363 00:30:14.531 Data Units Written: 2151 00:30:14.531 Host Read Commands: 102584 00:30:14.531 Host Write Commands: 100854 00:30:14.531 Controller Busy Time: 0 minutes 00:30:14.531 Power Cycles: 0 00:30:14.531 Power On Hours: 0 hours 00:30:14.531 Unsafe Shutdowns: 0 00:30:14.531 Unrecoverable Media Errors: 0 00:30:14.531 Lifetime Error Log Entries: 0 00:30:14.531 Warning Temperature Time: 0 minutes 00:30:14.531 Critical Temperature Time: 0 minutes 00:30:14.531 00:30:14.531 Number of Queues 00:30:14.531 ================ 00:30:14.531 Number of I/O Submission Queues: 64 00:30:14.531 Number of I/O Completion Queues: 64 00:30:14.531 00:30:14.531 ZNS Specific Controller Data 00:30:14.531 ============================ 00:30:14.531 Zone Append Size Limit: 0 00:30:14.531 00:30:14.531 00:30:14.531 Active Namespaces 00:30:14.531 ================= 00:30:14.532 Namespace ID:1 00:30:14.532 Error Recovery Timeout: Unlimited 00:30:14.532 Command Set Identifier: NVM (00h) 00:30:14.532 Deallocate: Supported 00:30:14.532 Deallocated/Unwritten Error: Supported 00:30:14.532 Deallocated Read Value: All 0x00 00:30:14.532 Deallocate in Write Zeroes: Not Supported 00:30:14.532 Deallocated Guard Field: 0xFFFF 00:30:14.532 Flush: Supported 00:30:14.532 Reservation: Not Supported 00:30:14.532 Namespace Sharing Capabilities: Private 00:30:14.532 Size (in LBAs): 1048576 (4GiB) 00:30:14.532 Capacity (in LBAs): 1048576 (4GiB) 00:30:14.532 Utilization (in LBAs): 1048576 (4GiB) 00:30:14.532 Thin Provisioning: Not Supported 00:30:14.532 Per-NS Atomic Units: No 00:30:14.532 Maximum Single Source Range Length: 128 00:30:14.532 Maximum Copy Length: 128 00:30:14.532 Maximum Source Range Count: 128 00:30:14.532 NGUID/EUI64 Never Reused: No 00:30:14.532 Namespace Write Protected: No 00:30:14.532 Number of LBA Formats: 8 00:30:14.532 Current LBA Format: LBA Format #04 00:30:14.532 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:30:14.532 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.532 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.532 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.532 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.532 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.532 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.532 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.532 00:30:14.532 NVM Specific Namespace Data 00:30:14.532 =========================== 00:30:14.532 Logical Block Storage Tag Mask: 0 00:30:14.532 Protection Information Capabilities: 00:30:14.532 16b Guard Protection Information Storage Tag Support: No 00:30:14.532 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.532 Storage Tag Check Read Support: No 00:30:14.532 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Namespace ID:2 00:30:14.532 Error Recovery Timeout: Unlimited 00:30:14.532 Command Set Identifier: NVM (00h) 00:30:14.532 Deallocate: Supported 00:30:14.532 Deallocated/Unwritten Error: Supported 00:30:14.532 Deallocated Read Value: All 0x00 00:30:14.532 Deallocate in Write Zeroes: Not Supported 00:30:14.532 Deallocated Guard Field: 0xFFFF 00:30:14.532 Flush: Supported 00:30:14.532 Reservation: Not Supported 00:30:14.532 Namespace Sharing Capabilities: Private 00:30:14.532 Size (in LBAs): 1048576 (4GiB) 00:30:14.532 Capacity (in LBAs): 1048576 (4GiB) 00:30:14.532 Utilization (in LBAs): 1048576 (4GiB) 00:30:14.532 Thin Provisioning: Not Supported 00:30:14.532 Per-NS Atomic Units: No 00:30:14.532 Maximum Single Source Range Length: 128 00:30:14.532 Maximum Copy Length: 128 00:30:14.532 Maximum Source Range Count: 128 00:30:14.532 NGUID/EUI64 Never Reused: No 00:30:14.532 Namespace Write Protected: No 00:30:14.532 Number of LBA Formats: 8 00:30:14.532 Current LBA Format: LBA Format #04 00:30:14.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.532 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.532 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.532 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.532 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.532 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.532 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.532 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.532 00:30:14.532 NVM Specific Namespace Data 00:30:14.532 =========================== 00:30:14.532 Logical Block Storage Tag Mask: 0 00:30:14.532 Protection Information Capabilities: 00:30:14.532 16b Guard Protection Information Storage Tag Support: No 00:30:14.532 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:30:14.532 Storage Tag Check Read Support: No 00:30:14.532 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.532 Namespace ID:3 00:30:14.532 Error Recovery Timeout: Unlimited 00:30:14.532 Command Set Identifier: NVM (00h) 00:30:14.532 Deallocate: Supported 00:30:14.532 Deallocated/Unwritten Error: Supported 00:30:14.532 Deallocated Read Value: All 0x00 00:30:14.532 Deallocate in Write Zeroes: Not Supported 00:30:14.532 Deallocated Guard Field: 0xFFFF 00:30:14.532 Flush: Supported 00:30:14.532 Reservation: Not Supported 00:30:14.532 Namespace Sharing Capabilities: Private 00:30:14.532 Size (in LBAs): 1048576 (4GiB) 00:30:14.532 Capacity (in LBAs): 1048576 (4GiB) 00:30:14.532 Utilization (in LBAs): 1048576 (4GiB) 00:30:14.532 Thin Provisioning: Not Supported 00:30:14.532 Per-NS Atomic Units: No 00:30:14.532 Maximum Single Source Range Length: 128 00:30:14.532 Maximum Copy Length: 128 00:30:14.532 Maximum Source Range Count: 128 00:30:14.532 NGUID/EUI64 Never Reused: No 00:30:14.532 Namespace Write Protected: No 00:30:14.532 Number of LBA Formats: 8 00:30:14.532 Current LBA Format: LBA Format #04 00:30:14.532 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.533 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:14.533 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:14.533 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:14.533 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:14.533 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:14.533 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:14.533 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:14.533 00:30:14.533 NVM Specific Namespace Data 00:30:14.533 =========================== 00:30:14.533 Logical Block Storage Tag Mask: 0 00:30:14.533 Protection Information Capabilities: 00:30:14.533 16b Guard Protection Information Storage Tag Support: No 00:30:14.533 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:14.533 Storage Tag Check Read Support: No 00:30:14.533 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:14.533 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:14.533 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:14.793 ===================================================== 00:30:14.793 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:14.793 ===================================================== 00:30:14.793 Controller Capabilities/Features 00:30:14.793 ================================ 00:30:14.793 Vendor ID: 1b36 00:30:14.793 Subsystem Vendor ID: 1af4 00:30:14.793 Serial Number: 12340 00:30:14.793 Model Number: QEMU NVMe Ctrl 00:30:14.793 Firmware Version: 8.0.0 00:30:14.793 Recommended Arb Burst: 6 00:30:14.793 IEEE OUI Identifier: 00 54 52 00:30:14.793 Multi-path I/O 00:30:14.793 May have multiple subsystem ports: No 00:30:14.793 May have multiple controllers: No 00:30:14.793 Associated with SR-IOV VF: No 00:30:14.793 Max Data Transfer Size: 524288 00:30:14.793 Max Number of Namespaces: 256 00:30:14.793 Max Number of I/O Queues: 64 00:30:14.793 NVMe Specification Version (VS): 1.4 00:30:14.793 NVMe Specification Version (Identify): 1.4 00:30:14.793 Maximum Queue Entries: 2048 00:30:14.793 Contiguous Queues Required: Yes 00:30:14.793 Arbitration Mechanisms Supported 00:30:14.793 Weighted Round Robin: Not Supported 00:30:14.793 Vendor Specific: Not Supported 00:30:14.793 Reset Timeout: 7500 ms 00:30:14.793 Doorbell Stride: 4 bytes 00:30:14.793 NVM Subsystem Reset: Not Supported 00:30:14.793 Command Sets Supported 00:30:14.793 NVM Command Set: Supported 00:30:14.793 Boot Partition: Not Supported 00:30:14.793 Memory Page Size Minimum: 4096 bytes 00:30:14.793 Memory Page Size Maximum: 65536 bytes 00:30:14.793 Persistent Memory Region: Not Supported 00:30:14.793 Optional Asynchronous Events Supported 00:30:14.793 Namespace Attribute Notices: Supported 00:30:14.793 Firmware Activation Notices: Not Supported 00:30:14.793 ANA Change Notices: Not Supported 00:30:14.793 PLE Aggregate Log Change Notices: Not Supported 00:30:14.793 LBA Status Info Alert Notices: Not Supported 00:30:14.793 EGE Aggregate Log Change Notices: Not Supported 00:30:14.793 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.793 Zone Descriptor Change Notices: Not Supported 00:30:14.793 Discovery Log Change Notices: Not Supported 00:30:14.793 Controller Attributes 00:30:14.793 128-bit Host Identifier: Not Supported 00:30:14.793 Non-Operational Permissive Mode: Not Supported 00:30:14.793 NVM Sets: Not Supported 00:30:14.793 Read Recovery Levels: Not Supported 00:30:14.793 Endurance Groups: Not Supported 00:30:14.793 Predictable Latency Mode: Not Supported 00:30:14.793 Traffic Based Keep ALive: Not Supported 00:30:14.793 Namespace Granularity: Not Supported 00:30:14.793 SQ Associations: Not Supported 00:30:14.793 UUID List: Not Supported 00:30:14.793 Multi-Domain Subsystem: Not Supported 00:30:14.793 Fixed Capacity Management: Not Supported 00:30:14.793 Variable Capacity Management: Not Supported 00:30:14.793 Delete Endurance Group: Not Supported 00:30:14.793 Delete NVM Set: Not Supported 00:30:14.793 Extended LBA Formats Supported: Supported 00:30:14.793 Flexible Data Placement Supported: Not Supported 00:30:14.793 00:30:14.793 Controller Memory Buffer Support 00:30:14.793 ================================ 00:30:14.793 Supported: No 00:30:14.793 00:30:14.793 Persistent Memory Region Support 00:30:14.793 
================================ 00:30:14.793 Supported: No 00:30:14.793 00:30:14.793 Admin Command Set Attributes 00:30:14.793 ============================ 00:30:14.793 Security Send/Receive: Not Supported 00:30:14.793 Format NVM: Supported 00:30:14.793 Firmware Activate/Download: Not Supported 00:30:14.793 Namespace Management: Supported 00:30:14.793 Device Self-Test: Not Supported 00:30:14.793 Directives: Supported 00:30:14.793 NVMe-MI: Not Supported 00:30:14.793 Virtualization Management: Not Supported 00:30:14.793 Doorbell Buffer Config: Supported 00:30:14.793 Get LBA Status Capability: Not Supported 00:30:14.793 Command & Feature Lockdown Capability: Not Supported 00:30:14.793 Abort Command Limit: 4 00:30:14.793 Async Event Request Limit: 4 00:30:14.793 Number of Firmware Slots: N/A 00:30:14.793 Firmware Slot 1 Read-Only: N/A 00:30:14.793 Firmware Activation Without Reset: N/A 00:30:14.793 Multiple Update Detection Support: N/A 00:30:14.793 Firmware Update Granularity: No Information Provided 00:30:14.793 Per-Namespace SMART Log: Yes 00:30:14.793 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.793 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:14.793 Command Effects Log Page: Supported 00:30:14.793 Get Log Page Extended Data: Supported 00:30:14.793 Telemetry Log Pages: Not Supported 00:30:14.793 Persistent Event Log Pages: Not Supported 00:30:14.793 Supported Log Pages Log Page: May Support 00:30:14.793 Commands Supported & Effects Log Page: Not Supported 00:30:14.793 Feature Identifiers & Effects Log Page:May Support 00:30:14.793 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.793 Data Area 4 for Telemetry Log: Not Supported 00:30:14.793 Error Log Page Entries Supported: 1 00:30:14.793 Keep Alive: Not Supported 00:30:14.793 00:30:14.793 NVM Command Set Attributes 00:30:14.793 ========================== 00:30:14.793 Submission Queue Entry Size 00:30:14.793 Max: 64 00:30:14.793 Min: 64 00:30:14.793 Completion Queue Entry Size 00:30:14.793 Max: 16 00:30:14.793 Min: 16 00:30:14.793 Number of Namespaces: 256 00:30:14.793 Compare Command: Supported 00:30:14.793 Write Uncorrectable Command: Not Supported 00:30:14.793 Dataset Management Command: Supported 00:30:14.793 Write Zeroes Command: Supported 00:30:14.793 Set Features Save Field: Supported 00:30:14.793 Reservations: Not Supported 00:30:14.793 Timestamp: Supported 00:30:14.793 Copy: Supported 00:30:14.793 Volatile Write Cache: Present 00:30:14.793 Atomic Write Unit (Normal): 1 00:30:14.793 Atomic Write Unit (PFail): 1 00:30:14.793 Atomic Compare & Write Unit: 1 00:30:14.793 Fused Compare & Write: Not Supported 00:30:14.793 Scatter-Gather List 00:30:14.793 SGL Command Set: Supported 00:30:14.793 SGL Keyed: Not Supported 00:30:14.793 SGL Bit Bucket Descriptor: Not Supported 00:30:14.793 SGL Metadata Pointer: Not Supported 00:30:14.793 Oversized SGL: Not Supported 00:30:14.793 SGL Metadata Address: Not Supported 00:30:14.793 SGL Offset: Not Supported 00:30:14.793 Transport SGL Data Block: Not Supported 00:30:14.793 Replay Protected Memory Block: Not Supported 00:30:14.793 00:30:14.793 Firmware Slot Information 00:30:14.793 ========================= 00:30:14.793 Active slot: 1 00:30:14.793 Slot 1 Firmware Revision: 1.0 00:30:14.793 00:30:14.793 00:30:14.793 Commands Supported and Effects 00:30:14.793 ============================== 00:30:14.793 Admin Commands 00:30:14.793 -------------- 00:30:14.793 Delete I/O Submission Queue (00h): Supported 00:30:14.793 Create I/O Submission Queue (01h): Supported 00:30:14.793 
Get Log Page (02h): Supported 00:30:14.793 Delete I/O Completion Queue (04h): Supported 00:30:14.793 Create I/O Completion Queue (05h): Supported 00:30:14.793 Identify (06h): Supported 00:30:14.793 Abort (08h): Supported 00:30:14.793 Set Features (09h): Supported 00:30:14.793 Get Features (0Ah): Supported 00:30:14.793 Asynchronous Event Request (0Ch): Supported 00:30:14.793 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:14.793 Directive Send (19h): Supported 00:30:14.793 Directive Receive (1Ah): Supported 00:30:14.793 Virtualization Management (1Ch): Supported 00:30:14.793 Doorbell Buffer Config (7Ch): Supported 00:30:14.793 Format NVM (80h): Supported LBA-Change 00:30:14.793 I/O Commands 00:30:14.793 ------------ 00:30:14.793 Flush (00h): Supported LBA-Change 00:30:14.793 Write (01h): Supported LBA-Change 00:30:14.793 Read (02h): Supported 00:30:14.793 Compare (05h): Supported 00:30:14.793 Write Zeroes (08h): Supported LBA-Change 00:30:14.793 Dataset Management (09h): Supported LBA-Change 00:30:14.793 Unknown (0Ch): Supported 00:30:14.793 Unknown (12h): Supported 00:30:14.793 Copy (19h): Supported LBA-Change 00:30:14.793 Unknown (1Dh): Supported LBA-Change 00:30:14.793 00:30:14.793 Error Log 00:30:14.793 ========= 00:30:14.793 00:30:14.793 Arbitration 00:30:14.793 =========== 00:30:14.793 Arbitration Burst: no limit 00:30:14.793 00:30:14.793 Power Management 00:30:14.793 ================ 00:30:14.793 Number of Power States: 1 00:30:14.794 Current Power State: Power State #0 00:30:14.794 Power State #0: 00:30:14.794 Max Power: 25.00 W 00:30:14.794 Non-Operational State: Operational 00:30:14.794 Entry Latency: 16 microseconds 00:30:14.794 Exit Latency: 4 microseconds 00:30:14.794 Relative Read Throughput: 0 00:30:14.794 Relative Read Latency: 0 00:30:14.794 Relative Write Throughput: 0 00:30:14.794 Relative Write Latency: 0 00:30:15.053 Idle Power: Not Reported 00:30:15.053 Active Power: Not Reported 00:30:15.053 Non-Operational Permissive Mode: Not Supported 00:30:15.053 00:30:15.053 Health Information 00:30:15.053 ================== 00:30:15.053 Critical Warnings: 00:30:15.053 Available Spare Space: OK 00:30:15.053 Temperature: OK 00:30:15.053 Device Reliability: OK 00:30:15.053 Read Only: No 00:30:15.053 Volatile Memory Backup: OK 00:30:15.053 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.053 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.053 Available Spare: 0% 00:30:15.053 Available Spare Threshold: 0% 00:30:15.053 Life Percentage Used: 0% 00:30:15.053 Data Units Read: 760 00:30:15.053 Data Units Written: 688 00:30:15.053 Host Read Commands: 33673 00:30:15.053 Host Write Commands: 33459 00:30:15.053 Controller Busy Time: 0 minutes 00:30:15.053 Power Cycles: 0 00:30:15.053 Power On Hours: 0 hours 00:30:15.053 Unsafe Shutdowns: 0 00:30:15.053 Unrecoverable Media Errors: 0 00:30:15.053 Lifetime Error Log Entries: 0 00:30:15.053 Warning Temperature Time: 0 minutes 00:30:15.053 Critical Temperature Time: 0 minutes 00:30:15.053 00:30:15.053 Number of Queues 00:30:15.053 ================ 00:30:15.053 Number of I/O Submission Queues: 64 00:30:15.053 Number of I/O Completion Queues: 64 00:30:15.053 00:30:15.053 ZNS Specific Controller Data 00:30:15.053 ============================ 00:30:15.053 Zone Append Size Limit: 0 00:30:15.053 00:30:15.053 00:30:15.053 Active Namespaces 00:30:15.053 ================= 00:30:15.053 Namespace ID:1 00:30:15.053 Error Recovery Timeout: Unlimited 00:30:15.053 Command Set Identifier: NVM (00h) 00:30:15.053 Deallocate: Supported 
00:30:15.053 Deallocated/Unwritten Error: Supported 00:30:15.053 Deallocated Read Value: All 0x00 00:30:15.053 Deallocate in Write Zeroes: Not Supported 00:30:15.053 Deallocated Guard Field: 0xFFFF 00:30:15.053 Flush: Supported 00:30:15.053 Reservation: Not Supported 00:30:15.053 Metadata Transferred as: Separate Metadata Buffer 00:30:15.053 Namespace Sharing Capabilities: Private 00:30:15.053 Size (in LBAs): 1548666 (5GiB) 00:30:15.053 Capacity (in LBAs): 1548666 (5GiB) 00:30:15.053 Utilization (in LBAs): 1548666 (5GiB) 00:30:15.053 Thin Provisioning: Not Supported 00:30:15.053 Per-NS Atomic Units: No 00:30:15.053 Maximum Single Source Range Length: 128 00:30:15.053 Maximum Copy Length: 128 00:30:15.053 Maximum Source Range Count: 128 00:30:15.053 NGUID/EUI64 Never Reused: No 00:30:15.053 Namespace Write Protected: No 00:30:15.053 Number of LBA Formats: 8 00:30:15.053 Current LBA Format: LBA Format #07 00:30:15.053 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.054 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.054 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.054 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.054 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.054 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.054 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.054 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.054 00:30:15.054 NVM Specific Namespace Data 00:30:15.054 =========================== 00:30:15.054 Logical Block Storage Tag Mask: 0 00:30:15.054 Protection Information Capabilities: 00:30:15.054 16b Guard Protection Information Storage Tag Support: No 00:30:15.054 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.054 Storage Tag Check Read Support: No 00:30:15.054 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.054 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:15.054 05:41:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:30:15.314 ===================================================== 00:30:15.314 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:15.314 ===================================================== 00:30:15.314 Controller Capabilities/Features 00:30:15.314 ================================ 00:30:15.314 Vendor ID: 1b36 00:30:15.314 Subsystem Vendor ID: 1af4 00:30:15.314 Serial Number: 12341 00:30:15.314 Model Number: QEMU NVMe Ctrl 00:30:15.314 Firmware Version: 8.0.0 00:30:15.314 Recommended Arb Burst: 6 00:30:15.314 IEEE OUI Identifier: 00 54 52 00:30:15.314 Multi-path I/O 00:30:15.314 May have multiple subsystem ports: No 00:30:15.314 May have multiple 
controllers: No 00:30:15.314 Associated with SR-IOV VF: No 00:30:15.314 Max Data Transfer Size: 524288 00:30:15.314 Max Number of Namespaces: 256 00:30:15.314 Max Number of I/O Queues: 64 00:30:15.314 NVMe Specification Version (VS): 1.4 00:30:15.314 NVMe Specification Version (Identify): 1.4 00:30:15.314 Maximum Queue Entries: 2048 00:30:15.314 Contiguous Queues Required: Yes 00:30:15.314 Arbitration Mechanisms Supported 00:30:15.314 Weighted Round Robin: Not Supported 00:30:15.314 Vendor Specific: Not Supported 00:30:15.314 Reset Timeout: 7500 ms 00:30:15.314 Doorbell Stride: 4 bytes 00:30:15.314 NVM Subsystem Reset: Not Supported 00:30:15.314 Command Sets Supported 00:30:15.314 NVM Command Set: Supported 00:30:15.314 Boot Partition: Not Supported 00:30:15.314 Memory Page Size Minimum: 4096 bytes 00:30:15.314 Memory Page Size Maximum: 65536 bytes 00:30:15.314 Persistent Memory Region: Not Supported 00:30:15.314 Optional Asynchronous Events Supported 00:30:15.314 Namespace Attribute Notices: Supported 00:30:15.314 Firmware Activation Notices: Not Supported 00:30:15.314 ANA Change Notices: Not Supported 00:30:15.314 PLE Aggregate Log Change Notices: Not Supported 00:30:15.314 LBA Status Info Alert Notices: Not Supported 00:30:15.314 EGE Aggregate Log Change Notices: Not Supported 00:30:15.314 Normal NVM Subsystem Shutdown event: Not Supported 00:30:15.314 Zone Descriptor Change Notices: Not Supported 00:30:15.314 Discovery Log Change Notices: Not Supported 00:30:15.314 Controller Attributes 00:30:15.314 128-bit Host Identifier: Not Supported 00:30:15.314 Non-Operational Permissive Mode: Not Supported 00:30:15.314 NVM Sets: Not Supported 00:30:15.314 Read Recovery Levels: Not Supported 00:30:15.314 Endurance Groups: Not Supported 00:30:15.314 Predictable Latency Mode: Not Supported 00:30:15.314 Traffic Based Keep ALive: Not Supported 00:30:15.314 Namespace Granularity: Not Supported 00:30:15.315 SQ Associations: Not Supported 00:30:15.315 UUID List: Not Supported 00:30:15.315 Multi-Domain Subsystem: Not Supported 00:30:15.315 Fixed Capacity Management: Not Supported 00:30:15.315 Variable Capacity Management: Not Supported 00:30:15.315 Delete Endurance Group: Not Supported 00:30:15.315 Delete NVM Set: Not Supported 00:30:15.315 Extended LBA Formats Supported: Supported 00:30:15.315 Flexible Data Placement Supported: Not Supported 00:30:15.315 00:30:15.315 Controller Memory Buffer Support 00:30:15.315 ================================ 00:30:15.315 Supported: No 00:30:15.315 00:30:15.315 Persistent Memory Region Support 00:30:15.315 ================================ 00:30:15.315 Supported: No 00:30:15.315 00:30:15.315 Admin Command Set Attributes 00:30:15.315 ============================ 00:30:15.315 Security Send/Receive: Not Supported 00:30:15.315 Format NVM: Supported 00:30:15.315 Firmware Activate/Download: Not Supported 00:30:15.315 Namespace Management: Supported 00:30:15.315 Device Self-Test: Not Supported 00:30:15.315 Directives: Supported 00:30:15.315 NVMe-MI: Not Supported 00:30:15.315 Virtualization Management: Not Supported 00:30:15.315 Doorbell Buffer Config: Supported 00:30:15.315 Get LBA Status Capability: Not Supported 00:30:15.315 Command & Feature Lockdown Capability: Not Supported 00:30:15.315 Abort Command Limit: 4 00:30:15.315 Async Event Request Limit: 4 00:30:15.315 Number of Firmware Slots: N/A 00:30:15.315 Firmware Slot 1 Read-Only: N/A 00:30:15.315 Firmware Activation Without Reset: N/A 00:30:15.315 Multiple Update Detection Support: N/A 00:30:15.315 Firmware Update 
Granularity: No Information Provided 00:30:15.315 Per-Namespace SMART Log: Yes 00:30:15.315 Asymmetric Namespace Access Log Page: Not Supported 00:30:15.315 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:30:15.315 Command Effects Log Page: Supported 00:30:15.315 Get Log Page Extended Data: Supported 00:30:15.315 Telemetry Log Pages: Not Supported 00:30:15.315 Persistent Event Log Pages: Not Supported 00:30:15.315 Supported Log Pages Log Page: May Support 00:30:15.315 Commands Supported & Effects Log Page: Not Supported 00:30:15.315 Feature Identifiers & Effects Log Page:May Support 00:30:15.315 NVMe-MI Commands & Effects Log Page: May Support 00:30:15.315 Data Area 4 for Telemetry Log: Not Supported 00:30:15.315 Error Log Page Entries Supported: 1 00:30:15.315 Keep Alive: Not Supported 00:30:15.315 00:30:15.315 NVM Command Set Attributes 00:30:15.315 ========================== 00:30:15.315 Submission Queue Entry Size 00:30:15.315 Max: 64 00:30:15.315 Min: 64 00:30:15.315 Completion Queue Entry Size 00:30:15.315 Max: 16 00:30:15.315 Min: 16 00:30:15.315 Number of Namespaces: 256 00:30:15.315 Compare Command: Supported 00:30:15.315 Write Uncorrectable Command: Not Supported 00:30:15.315 Dataset Management Command: Supported 00:30:15.315 Write Zeroes Command: Supported 00:30:15.315 Set Features Save Field: Supported 00:30:15.315 Reservations: Not Supported 00:30:15.315 Timestamp: Supported 00:30:15.315 Copy: Supported 00:30:15.315 Volatile Write Cache: Present 00:30:15.315 Atomic Write Unit (Normal): 1 00:30:15.315 Atomic Write Unit (PFail): 1 00:30:15.315 Atomic Compare & Write Unit: 1 00:30:15.315 Fused Compare & Write: Not Supported 00:30:15.315 Scatter-Gather List 00:30:15.315 SGL Command Set: Supported 00:30:15.315 SGL Keyed: Not Supported 00:30:15.315 SGL Bit Bucket Descriptor: Not Supported 00:30:15.315 SGL Metadata Pointer: Not Supported 00:30:15.315 Oversized SGL: Not Supported 00:30:15.315 SGL Metadata Address: Not Supported 00:30:15.315 SGL Offset: Not Supported 00:30:15.315 Transport SGL Data Block: Not Supported 00:30:15.315 Replay Protected Memory Block: Not Supported 00:30:15.315 00:30:15.315 Firmware Slot Information 00:30:15.315 ========================= 00:30:15.315 Active slot: 1 00:30:15.315 Slot 1 Firmware Revision: 1.0 00:30:15.315 00:30:15.315 00:30:15.315 Commands Supported and Effects 00:30:15.315 ============================== 00:30:15.315 Admin Commands 00:30:15.315 -------------- 00:30:15.315 Delete I/O Submission Queue (00h): Supported 00:30:15.315 Create I/O Submission Queue (01h): Supported 00:30:15.315 Get Log Page (02h): Supported 00:30:15.315 Delete I/O Completion Queue (04h): Supported 00:30:15.315 Create I/O Completion Queue (05h): Supported 00:30:15.315 Identify (06h): Supported 00:30:15.315 Abort (08h): Supported 00:30:15.315 Set Features (09h): Supported 00:30:15.315 Get Features (0Ah): Supported 00:30:15.315 Asynchronous Event Request (0Ch): Supported 00:30:15.315 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:15.315 Directive Send (19h): Supported 00:30:15.315 Directive Receive (1Ah): Supported 00:30:15.315 Virtualization Management (1Ch): Supported 00:30:15.315 Doorbell Buffer Config (7Ch): Supported 00:30:15.315 Format NVM (80h): Supported LBA-Change 00:30:15.315 I/O Commands 00:30:15.315 ------------ 00:30:15.315 Flush (00h): Supported LBA-Change 00:30:15.315 Write (01h): Supported LBA-Change 00:30:15.315 Read (02h): Supported 00:30:15.315 Compare (05h): Supported 00:30:15.315 Write Zeroes (08h): Supported LBA-Change 00:30:15.315 
Dataset Management (09h): Supported LBA-Change 00:30:15.315 Unknown (0Ch): Supported 00:30:15.315 Unknown (12h): Supported 00:30:15.315 Copy (19h): Supported LBA-Change 00:30:15.315 Unknown (1Dh): Supported LBA-Change 00:30:15.315 00:30:15.315 Error Log 00:30:15.315 ========= 00:30:15.315 00:30:15.315 Arbitration 00:30:15.315 =========== 00:30:15.315 Arbitration Burst: no limit 00:30:15.315 00:30:15.315 Power Management 00:30:15.315 ================ 00:30:15.315 Number of Power States: 1 00:30:15.315 Current Power State: Power State #0 00:30:15.315 Power State #0: 00:30:15.315 Max Power: 25.00 W 00:30:15.315 Non-Operational State: Operational 00:30:15.315 Entry Latency: 16 microseconds 00:30:15.315 Exit Latency: 4 microseconds 00:30:15.315 Relative Read Throughput: 0 00:30:15.315 Relative Read Latency: 0 00:30:15.315 Relative Write Throughput: 0 00:30:15.315 Relative Write Latency: 0 00:30:15.315 Idle Power: Not Reported 00:30:15.315 Active Power: Not Reported 00:30:15.315 Non-Operational Permissive Mode: Not Supported 00:30:15.315 00:30:15.315 Health Information 00:30:15.315 ================== 00:30:15.315 Critical Warnings: 00:30:15.315 Available Spare Space: OK 00:30:15.315 Temperature: OK 00:30:15.315 Device Reliability: OK 00:30:15.315 Read Only: No 00:30:15.315 Volatile Memory Backup: OK 00:30:15.315 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.315 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.315 Available Spare: 0% 00:30:15.315 Available Spare Threshold: 0% 00:30:15.315 Life Percentage Used: 0% 00:30:15.315 Data Units Read: 1187 00:30:15.315 Data Units Written: 1048 00:30:15.315 Host Read Commands: 50359 00:30:15.315 Host Write Commands: 49056 00:30:15.315 Controller Busy Time: 0 minutes 00:30:15.315 Power Cycles: 0 00:30:15.315 Power On Hours: 0 hours 00:30:15.315 Unsafe Shutdowns: 0 00:30:15.315 Unrecoverable Media Errors: 0 00:30:15.315 Lifetime Error Log Entries: 0 00:30:15.315 Warning Temperature Time: 0 minutes 00:30:15.315 Critical Temperature Time: 0 minutes 00:30:15.315 00:30:15.315 Number of Queues 00:30:15.315 ================ 00:30:15.315 Number of I/O Submission Queues: 64 00:30:15.315 Number of I/O Completion Queues: 64 00:30:15.315 00:30:15.315 ZNS Specific Controller Data 00:30:15.315 ============================ 00:30:15.315 Zone Append Size Limit: 0 00:30:15.315 00:30:15.315 00:30:15.315 Active Namespaces 00:30:15.315 ================= 00:30:15.315 Namespace ID:1 00:30:15.315 Error Recovery Timeout: Unlimited 00:30:15.315 Command Set Identifier: NVM (00h) 00:30:15.315 Deallocate: Supported 00:30:15.315 Deallocated/Unwritten Error: Supported 00:30:15.315 Deallocated Read Value: All 0x00 00:30:15.315 Deallocate in Write Zeroes: Not Supported 00:30:15.315 Deallocated Guard Field: 0xFFFF 00:30:15.315 Flush: Supported 00:30:15.315 Reservation: Not Supported 00:30:15.315 Namespace Sharing Capabilities: Private 00:30:15.315 Size (in LBAs): 1310720 (5GiB) 00:30:15.315 Capacity (in LBAs): 1310720 (5GiB) 00:30:15.315 Utilization (in LBAs): 1310720 (5GiB) 00:30:15.315 Thin Provisioning: Not Supported 00:30:15.315 Per-NS Atomic Units: No 00:30:15.315 Maximum Single Source Range Length: 128 00:30:15.315 Maximum Copy Length: 128 00:30:15.316 Maximum Source Range Count: 128 00:30:15.316 NGUID/EUI64 Never Reused: No 00:30:15.316 Namespace Write Protected: No 00:30:15.316 Number of LBA Formats: 8 00:30:15.316 Current LBA Format: LBA Format #04 00:30:15.316 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.316 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:30:15.316 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.316 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.316 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.316 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.316 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.316 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.316 00:30:15.316 NVM Specific Namespace Data 00:30:15.316 =========================== 00:30:15.316 Logical Block Storage Tag Mask: 0 00:30:15.316 Protection Information Capabilities: 00:30:15.316 16b Guard Protection Information Storage Tag Support: No 00:30:15.316 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.316 Storage Tag Check Read Support: No 00:30:15.316 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.316 05:41:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:15.316 05:41:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:30:15.577 ===================================================== 00:30:15.577 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:15.577 ===================================================== 00:30:15.577 Controller Capabilities/Features 00:30:15.577 ================================ 00:30:15.577 Vendor ID: 1b36 00:30:15.577 Subsystem Vendor ID: 1af4 00:30:15.577 Serial Number: 12342 00:30:15.577 Model Number: QEMU NVMe Ctrl 00:30:15.577 Firmware Version: 8.0.0 00:30:15.577 Recommended Arb Burst: 6 00:30:15.577 IEEE OUI Identifier: 00 54 52 00:30:15.577 Multi-path I/O 00:30:15.577 May have multiple subsystem ports: No 00:30:15.577 May have multiple controllers: No 00:30:15.577 Associated with SR-IOV VF: No 00:30:15.577 Max Data Transfer Size: 524288 00:30:15.577 Max Number of Namespaces: 256 00:30:15.577 Max Number of I/O Queues: 64 00:30:15.577 NVMe Specification Version (VS): 1.4 00:30:15.577 NVMe Specification Version (Identify): 1.4 00:30:15.577 Maximum Queue Entries: 2048 00:30:15.577 Contiguous Queues Required: Yes 00:30:15.577 Arbitration Mechanisms Supported 00:30:15.577 Weighted Round Robin: Not Supported 00:30:15.577 Vendor Specific: Not Supported 00:30:15.577 Reset Timeout: 7500 ms 00:30:15.577 Doorbell Stride: 4 bytes 00:30:15.577 NVM Subsystem Reset: Not Supported 00:30:15.577 Command Sets Supported 00:30:15.577 NVM Command Set: Supported 00:30:15.577 Boot Partition: Not Supported 00:30:15.577 Memory Page Size Minimum: 4096 bytes 00:30:15.577 Memory Page Size Maximum: 65536 bytes 00:30:15.577 Persistent Memory Region: Not Supported 00:30:15.577 Optional Asynchronous Events Supported 00:30:15.577 Namespace Attribute Notices: Supported 00:30:15.577 
Firmware Activation Notices: Not Supported 00:30:15.577 ANA Change Notices: Not Supported 00:30:15.577 PLE Aggregate Log Change Notices: Not Supported 00:30:15.577 LBA Status Info Alert Notices: Not Supported 00:30:15.577 EGE Aggregate Log Change Notices: Not Supported 00:30:15.577 Normal NVM Subsystem Shutdown event: Not Supported 00:30:15.577 Zone Descriptor Change Notices: Not Supported 00:30:15.577 Discovery Log Change Notices: Not Supported 00:30:15.577 Controller Attributes 00:30:15.577 128-bit Host Identifier: Not Supported 00:30:15.577 Non-Operational Permissive Mode: Not Supported 00:30:15.577 NVM Sets: Not Supported 00:30:15.577 Read Recovery Levels: Not Supported 00:30:15.577 Endurance Groups: Not Supported 00:30:15.577 Predictable Latency Mode: Not Supported 00:30:15.577 Traffic Based Keep ALive: Not Supported 00:30:15.577 Namespace Granularity: Not Supported 00:30:15.577 SQ Associations: Not Supported 00:30:15.577 UUID List: Not Supported 00:30:15.577 Multi-Domain Subsystem: Not Supported 00:30:15.577 Fixed Capacity Management: Not Supported 00:30:15.577 Variable Capacity Management: Not Supported 00:30:15.577 Delete Endurance Group: Not Supported 00:30:15.577 Delete NVM Set: Not Supported 00:30:15.577 Extended LBA Formats Supported: Supported 00:30:15.577 Flexible Data Placement Supported: Not Supported 00:30:15.577 00:30:15.577 Controller Memory Buffer Support 00:30:15.577 ================================ 00:30:15.577 Supported: No 00:30:15.577 00:30:15.577 Persistent Memory Region Support 00:30:15.577 ================================ 00:30:15.577 Supported: No 00:30:15.577 00:30:15.577 Admin Command Set Attributes 00:30:15.577 ============================ 00:30:15.577 Security Send/Receive: Not Supported 00:30:15.577 Format NVM: Supported 00:30:15.577 Firmware Activate/Download: Not Supported 00:30:15.577 Namespace Management: Supported 00:30:15.577 Device Self-Test: Not Supported 00:30:15.577 Directives: Supported 00:30:15.577 NVMe-MI: Not Supported 00:30:15.577 Virtualization Management: Not Supported 00:30:15.577 Doorbell Buffer Config: Supported 00:30:15.577 Get LBA Status Capability: Not Supported 00:30:15.577 Command & Feature Lockdown Capability: Not Supported 00:30:15.577 Abort Command Limit: 4 00:30:15.577 Async Event Request Limit: 4 00:30:15.577 Number of Firmware Slots: N/A 00:30:15.577 Firmware Slot 1 Read-Only: N/A 00:30:15.577 Firmware Activation Without Reset: N/A 00:30:15.577 Multiple Update Detection Support: N/A 00:30:15.577 Firmware Update Granularity: No Information Provided 00:30:15.577 Per-Namespace SMART Log: Yes 00:30:15.577 Asymmetric Namespace Access Log Page: Not Supported 00:30:15.577 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:30:15.577 Command Effects Log Page: Supported 00:30:15.577 Get Log Page Extended Data: Supported 00:30:15.577 Telemetry Log Pages: Not Supported 00:30:15.577 Persistent Event Log Pages: Not Supported 00:30:15.577 Supported Log Pages Log Page: May Support 00:30:15.577 Commands Supported & Effects Log Page: Not Supported 00:30:15.577 Feature Identifiers & Effects Log Page:May Support 00:30:15.577 NVMe-MI Commands & Effects Log Page: May Support 00:30:15.577 Data Area 4 for Telemetry Log: Not Supported 00:30:15.577 Error Log Page Entries Supported: 1 00:30:15.577 Keep Alive: Not Supported 00:30:15.577 00:30:15.577 NVM Command Set Attributes 00:30:15.577 ========================== 00:30:15.577 Submission Queue Entry Size 00:30:15.577 Max: 64 00:30:15.577 Min: 64 00:30:15.577 Completion Queue Entry Size 00:30:15.577 Max: 16 
00:30:15.577 Min: 16 00:30:15.577 Number of Namespaces: 256 00:30:15.577 Compare Command: Supported 00:30:15.577 Write Uncorrectable Command: Not Supported 00:30:15.577 Dataset Management Command: Supported 00:30:15.577 Write Zeroes Command: Supported 00:30:15.577 Set Features Save Field: Supported 00:30:15.577 Reservations: Not Supported 00:30:15.577 Timestamp: Supported 00:30:15.577 Copy: Supported 00:30:15.577 Volatile Write Cache: Present 00:30:15.577 Atomic Write Unit (Normal): 1 00:30:15.577 Atomic Write Unit (PFail): 1 00:30:15.577 Atomic Compare & Write Unit: 1 00:30:15.577 Fused Compare & Write: Not Supported 00:30:15.577 Scatter-Gather List 00:30:15.577 SGL Command Set: Supported 00:30:15.577 SGL Keyed: Not Supported 00:30:15.577 SGL Bit Bucket Descriptor: Not Supported 00:30:15.577 SGL Metadata Pointer: Not Supported 00:30:15.577 Oversized SGL: Not Supported 00:30:15.577 SGL Metadata Address: Not Supported 00:30:15.577 SGL Offset: Not Supported 00:30:15.577 Transport SGL Data Block: Not Supported 00:30:15.577 Replay Protected Memory Block: Not Supported 00:30:15.577 00:30:15.577 Firmware Slot Information 00:30:15.577 ========================= 00:30:15.577 Active slot: 1 00:30:15.577 Slot 1 Firmware Revision: 1.0 00:30:15.577 00:30:15.577 00:30:15.577 Commands Supported and Effects 00:30:15.577 ============================== 00:30:15.577 Admin Commands 00:30:15.577 -------------- 00:30:15.577 Delete I/O Submission Queue (00h): Supported 00:30:15.577 Create I/O Submission Queue (01h): Supported 00:30:15.577 Get Log Page (02h): Supported 00:30:15.577 Delete I/O Completion Queue (04h): Supported 00:30:15.577 Create I/O Completion Queue (05h): Supported 00:30:15.577 Identify (06h): Supported 00:30:15.577 Abort (08h): Supported 00:30:15.577 Set Features (09h): Supported 00:30:15.577 Get Features (0Ah): Supported 00:30:15.577 Asynchronous Event Request (0Ch): Supported 00:30:15.577 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:15.577 Directive Send (19h): Supported 00:30:15.577 Directive Receive (1Ah): Supported 00:30:15.577 Virtualization Management (1Ch): Supported 00:30:15.577 Doorbell Buffer Config (7Ch): Supported 00:30:15.577 Format NVM (80h): Supported LBA-Change 00:30:15.577 I/O Commands 00:30:15.577 ------------ 00:30:15.577 Flush (00h): Supported LBA-Change 00:30:15.577 Write (01h): Supported LBA-Change 00:30:15.577 Read (02h): Supported 00:30:15.577 Compare (05h): Supported 00:30:15.577 Write Zeroes (08h): Supported LBA-Change 00:30:15.577 Dataset Management (09h): Supported LBA-Change 00:30:15.577 Unknown (0Ch): Supported 00:30:15.577 Unknown (12h): Supported 00:30:15.577 Copy (19h): Supported LBA-Change 00:30:15.577 Unknown (1Dh): Supported LBA-Change 00:30:15.577 00:30:15.577 Error Log 00:30:15.577 ========= 00:30:15.577 00:30:15.577 Arbitration 00:30:15.577 =========== 00:30:15.577 Arbitration Burst: no limit 00:30:15.577 00:30:15.577 Power Management 00:30:15.577 ================ 00:30:15.577 Number of Power States: 1 00:30:15.577 Current Power State: Power State #0 00:30:15.577 Power State #0: 00:30:15.577 Max Power: 25.00 W 00:30:15.577 Non-Operational State: Operational 00:30:15.578 Entry Latency: 16 microseconds 00:30:15.578 Exit Latency: 4 microseconds 00:30:15.578 Relative Read Throughput: 0 00:30:15.578 Relative Read Latency: 0 00:30:15.578 Relative Write Throughput: 0 00:30:15.578 Relative Write Latency: 0 00:30:15.578 Idle Power: Not Reported 00:30:15.578 Active Power: Not Reported 00:30:15.578 Non-Operational Permissive Mode: Not Supported 
00:30:15.578 00:30:15.578 Health Information 00:30:15.578 ================== 00:30:15.578 Critical Warnings: 00:30:15.578 Available Spare Space: OK 00:30:15.578 Temperature: OK 00:30:15.578 Device Reliability: OK 00:30:15.578 Read Only: No 00:30:15.578 Volatile Memory Backup: OK 00:30:15.578 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.578 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.578 Available Spare: 0% 00:30:15.578 Available Spare Threshold: 0% 00:30:15.578 Life Percentage Used: 0% 00:30:15.578 Data Units Read: 2363 00:30:15.578 Data Units Written: 2151 00:30:15.578 Host Read Commands: 102584 00:30:15.578 Host Write Commands: 100854 00:30:15.578 Controller Busy Time: 0 minutes 00:30:15.578 Power Cycles: 0 00:30:15.578 Power On Hours: 0 hours 00:30:15.578 Unsafe Shutdowns: 0 00:30:15.578 Unrecoverable Media Errors: 0 00:30:15.578 Lifetime Error Log Entries: 0 00:30:15.578 Warning Temperature Time: 0 minutes 00:30:15.578 Critical Temperature Time: 0 minutes 00:30:15.578 00:30:15.578 Number of Queues 00:30:15.578 ================ 00:30:15.578 Number of I/O Submission Queues: 64 00:30:15.578 Number of I/O Completion Queues: 64 00:30:15.578 00:30:15.578 ZNS Specific Controller Data 00:30:15.578 ============================ 00:30:15.578 Zone Append Size Limit: 0 00:30:15.578 00:30:15.578 00:30:15.578 Active Namespaces 00:30:15.578 ================= 00:30:15.578 Namespace ID:1 00:30:15.578 Error Recovery Timeout: Unlimited 00:30:15.578 Command Set Identifier: NVM (00h) 00:30:15.578 Deallocate: Supported 00:30:15.578 Deallocated/Unwritten Error: Supported 00:30:15.578 Deallocated Read Value: All 0x00 00:30:15.578 Deallocate in Write Zeroes: Not Supported 00:30:15.578 Deallocated Guard Field: 0xFFFF 00:30:15.578 Flush: Supported 00:30:15.578 Reservation: Not Supported 00:30:15.578 Namespace Sharing Capabilities: Private 00:30:15.578 Size (in LBAs): 1048576 (4GiB) 00:30:15.578 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.578 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.578 Thin Provisioning: Not Supported 00:30:15.578 Per-NS Atomic Units: No 00:30:15.578 Maximum Single Source Range Length: 128 00:30:15.578 Maximum Copy Length: 128 00:30:15.578 Maximum Source Range Count: 128 00:30:15.578 NGUID/EUI64 Never Reused: No 00:30:15.578 Namespace Write Protected: No 00:30:15.578 Number of LBA Formats: 8 00:30:15.578 Current LBA Format: LBA Format #04 00:30:15.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.578 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.578 00:30:15.578 NVM Specific Namespace Data 00:30:15.578 =========================== 00:30:15.578 Logical Block Storage Tag Mask: 0 00:30:15.578 Protection Information Capabilities: 00:30:15.578 16b Guard Protection Information Storage Tag Support: No 00:30:15.578 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.578 Storage Tag Check Read Support: No 00:30:15.578 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Namespace ID:2 00:30:15.578 Error Recovery Timeout: Unlimited 00:30:15.578 Command Set Identifier: NVM (00h) 00:30:15.578 Deallocate: Supported 00:30:15.578 Deallocated/Unwritten Error: Supported 00:30:15.578 Deallocated Read Value: All 0x00 00:30:15.578 Deallocate in Write Zeroes: Not Supported 00:30:15.578 Deallocated Guard Field: 0xFFFF 00:30:15.578 Flush: Supported 00:30:15.578 Reservation: Not Supported 00:30:15.578 Namespace Sharing Capabilities: Private 00:30:15.578 Size (in LBAs): 1048576 (4GiB) 00:30:15.578 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.578 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.578 Thin Provisioning: Not Supported 00:30:15.578 Per-NS Atomic Units: No 00:30:15.578 Maximum Single Source Range Length: 128 00:30:15.578 Maximum Copy Length: 128 00:30:15.578 Maximum Source Range Count: 128 00:30:15.578 NGUID/EUI64 Never Reused: No 00:30:15.578 Namespace Write Protected: No 00:30:15.578 Number of LBA Formats: 8 00:30:15.578 Current LBA Format: LBA Format #04 00:30:15.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.578 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.578 00:30:15.578 NVM Specific Namespace Data 00:30:15.578 =========================== 00:30:15.578 Logical Block Storage Tag Mask: 0 00:30:15.578 Protection Information Capabilities: 00:30:15.578 16b Guard Protection Information Storage Tag Support: No 00:30:15.578 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.578 Storage Tag Check Read Support: No 00:30:15.578 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.578 Namespace ID:3 00:30:15.578 Error Recovery Timeout: Unlimited 00:30:15.578 Command Set Identifier: NVM (00h) 00:30:15.578 Deallocate: Supported 00:30:15.578 Deallocated/Unwritten Error: Supported 00:30:15.578 Deallocated Read 
Value: All 0x00 00:30:15.578 Deallocate in Write Zeroes: Not Supported 00:30:15.578 Deallocated Guard Field: 0xFFFF 00:30:15.578 Flush: Supported 00:30:15.578 Reservation: Not Supported 00:30:15.578 Namespace Sharing Capabilities: Private 00:30:15.578 Size (in LBAs): 1048576 (4GiB) 00:30:15.578 Capacity (in LBAs): 1048576 (4GiB) 00:30:15.578 Utilization (in LBAs): 1048576 (4GiB) 00:30:15.578 Thin Provisioning: Not Supported 00:30:15.578 Per-NS Atomic Units: No 00:30:15.578 Maximum Single Source Range Length: 128 00:30:15.578 Maximum Copy Length: 128 00:30:15.578 Maximum Source Range Count: 128 00:30:15.578 NGUID/EUI64 Never Reused: No 00:30:15.578 Namespace Write Protected: No 00:30:15.578 Number of LBA Formats: 8 00:30:15.578 Current LBA Format: LBA Format #04 00:30:15.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.578 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.578 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.578 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:15.578 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.578 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.578 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.578 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.578 00:30:15.578 NVM Specific Namespace Data 00:30:15.578 =========================== 00:30:15.578 Logical Block Storage Tag Mask: 0 00:30:15.578 Protection Information Capabilities: 00:30:15.578 16b Guard Protection Information Storage Tag Support: No 00:30:15.578 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.578 Storage Tag Check Read Support: No 00:30:15.579 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.579 05:41:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:15.579 05:41:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:30:15.840 ===================================================== 00:30:15.840 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:15.840 ===================================================== 00:30:15.840 Controller Capabilities/Features 00:30:15.840 ================================ 00:30:15.840 Vendor ID: 1b36 00:30:15.840 Subsystem Vendor ID: 1af4 00:30:15.840 Serial Number: 12343 00:30:15.840 Model Number: QEMU NVMe Ctrl 00:30:15.840 Firmware Version: 8.0.0 00:30:15.840 Recommended Arb Burst: 6 00:30:15.840 IEEE OUI Identifier: 00 54 52 00:30:15.840 Multi-path I/O 00:30:15.840 May have multiple subsystem ports: No 00:30:15.840 May have multiple controllers: Yes 00:30:15.840 Associated with SR-IOV VF: No 00:30:15.840 Max Data Transfer Size: 524288 00:30:15.840 Max Number of Namespaces: 
256 00:30:15.840 Max Number of I/O Queues: 64 00:30:15.840 NVMe Specification Version (VS): 1.4 00:30:15.840 NVMe Specification Version (Identify): 1.4 00:30:15.840 Maximum Queue Entries: 2048 00:30:15.840 Contiguous Queues Required: Yes 00:30:15.840 Arbitration Mechanisms Supported 00:30:15.840 Weighted Round Robin: Not Supported 00:30:15.840 Vendor Specific: Not Supported 00:30:15.840 Reset Timeout: 7500 ms 00:30:15.840 Doorbell Stride: 4 bytes 00:30:15.840 NVM Subsystem Reset: Not Supported 00:30:15.840 Command Sets Supported 00:30:15.840 NVM Command Set: Supported 00:30:15.840 Boot Partition: Not Supported 00:30:15.840 Memory Page Size Minimum: 4096 bytes 00:30:15.840 Memory Page Size Maximum: 65536 bytes 00:30:15.840 Persistent Memory Region: Not Supported 00:30:15.840 Optional Asynchronous Events Supported 00:30:15.840 Namespace Attribute Notices: Supported 00:30:15.840 Firmware Activation Notices: Not Supported 00:30:15.840 ANA Change Notices: Not Supported 00:30:15.840 PLE Aggregate Log Change Notices: Not Supported 00:30:15.840 LBA Status Info Alert Notices: Not Supported 00:30:15.840 EGE Aggregate Log Change Notices: Not Supported 00:30:15.840 Normal NVM Subsystem Shutdown event: Not Supported 00:30:15.840 Zone Descriptor Change Notices: Not Supported 00:30:15.840 Discovery Log Change Notices: Not Supported 00:30:15.840 Controller Attributes 00:30:15.840 128-bit Host Identifier: Not Supported 00:30:15.840 Non-Operational Permissive Mode: Not Supported 00:30:15.840 NVM Sets: Not Supported 00:30:15.840 Read Recovery Levels: Not Supported 00:30:15.840 Endurance Groups: Supported 00:30:15.840 Predictable Latency Mode: Not Supported 00:30:15.840 Traffic Based Keep Alive: Not Supported 00:30:15.840 Namespace Granularity: Not Supported 00:30:15.840 SQ Associations: Not Supported 00:30:15.840 UUID List: Not Supported 00:30:15.841 Multi-Domain Subsystem: Not Supported 00:30:15.841 Fixed Capacity Management: Not Supported 00:30:15.841 Variable Capacity Management: Not Supported 00:30:15.841 Delete Endurance Group: Not Supported 00:30:15.841 Delete NVM Set: Not Supported 00:30:15.841 Extended LBA Formats Supported: Supported 00:30:15.841 Flexible Data Placement Supported: Supported 00:30:15.841 00:30:15.841 Controller Memory Buffer Support 00:30:15.841 ================================ 00:30:15.841 Supported: No 00:30:15.841 00:30:15.841 Persistent Memory Region Support 00:30:15.841 ================================ 00:30:15.841 Supported: No 00:30:15.841 00:30:15.841 Admin Command Set Attributes 00:30:15.841 ============================ 00:30:15.841 Security Send/Receive: Not Supported 00:30:15.841 Format NVM: Supported 00:30:15.841 Firmware Activate/Download: Not Supported 00:30:15.841 Namespace Management: Supported 00:30:15.841 Device Self-Test: Not Supported 00:30:15.841 Directives: Supported 00:30:15.841 NVMe-MI: Not Supported 00:30:15.841 Virtualization Management: Not Supported 00:30:15.841 Doorbell Buffer Config: Supported 00:30:15.841 Get LBA Status Capability: Not Supported 00:30:15.841 Command & Feature Lockdown Capability: Not Supported 00:30:15.841 Abort Command Limit: 4 00:30:15.841 Async Event Request Limit: 4 00:30:15.841 Number of Firmware Slots: N/A 00:30:15.841 Firmware Slot 1 Read-Only: N/A 00:30:15.841 Firmware Activation Without Reset: N/A 00:30:15.841 Multiple Update Detection Support: N/A 00:30:15.841 Firmware Update Granularity: No Information Provided 00:30:15.841 Per-Namespace SMART Log: Yes 00:30:15.841 Asymmetric Namespace Access Log Page: Not Supported
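(Note on the capability fields above: Maximum Queue Entries, Contiguous Queues Required, Reset Timeout, Doorbell Stride and the two memory page size bounds are all decoded from the controller's 64-bit CAP register, laid out as in the NVMe base specification. A minimal, self-contained sketch of that decoding follows; the raw CAP value used here is hypothetical, chosen only so the decoded fields match the output above, and the code is illustrative rather than SPDK's own implementation.)

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical raw CAP value; its fields match the identify output
         * above: MQES=2047, CQR=1, TO=15, DSTRD=0, MPSMIN=0, MPSMAX=4. */
        uint64_t cap = 0x004000200F0107FFULL;

        uint32_t mqes   = (cap >>  0) & 0xFFFF; /* 0's-based max queue entries */
        uint32_t cqr    = (cap >> 16) & 0x1;    /* contiguous queues required  */
        uint32_t to     = (cap >> 24) & 0xFF;   /* timeout, in 500 ms units    */
        uint32_t dstrd  = (cap >> 32) & 0xF;    /* stride = 4 << DSTRD bytes   */
        uint32_t mpsmin = (cap >> 48) & 0xF;    /* min page = 4096 << MPSMIN   */
        uint32_t mpsmax = (cap >> 52) & 0xF;    /* max page = 4096 << MPSMAX   */

        printf("Maximum Queue Entries: %u\n", mqes + 1);          /* 2048  */
        printf("Contiguous Queues Required: %s\n", cqr ? "Yes" : "No");
        printf("Reset Timeout: %u ms\n", to * 500);               /* 7500  */
        printf("Doorbell Stride: %u bytes\n", 4u << dstrd);       /* 4     */
        printf("Memory Page Size Minimum: %u bytes\n", 4096u << mpsmin);
        printf("Memory Page Size Maximum: %u bytes\n", 4096u << mpsmax);
        return 0;
    }

Compiled with any C99 compiler this prints the same six lines reported above; spdk_nvme_identify performs this kind of field extraction on the registers and identify structures it reads back from the controller.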
00:30:15.841 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:30:15.841 Command Effects Log Page: Supported 00:30:15.841 Get Log Page Extended Data: Supported 00:30:15.841 Telemetry Log Pages: Not Supported 00:30:15.841 Persistent Event Log Pages: Not Supported 00:30:15.841 Supported Log Pages Log Page: May Support 00:30:15.841 Commands Supported & Effects Log Page: Not Supported 00:30:15.841 Feature Identifiers & Effects Log Page: May Support 00:30:15.841 NVMe-MI Commands & Effects Log Page: May Support 00:30:15.841 Data Area 4 for Telemetry Log: Not Supported 00:30:15.841 Error Log Page Entries Supported: 1 00:30:15.841 Keep Alive: Not Supported 00:30:15.841 00:30:15.841 NVM Command Set Attributes 00:30:15.841 ========================== 00:30:15.841 Submission Queue Entry Size 00:30:15.841 Max: 64 00:30:15.841 Min: 64 00:30:15.841 Completion Queue Entry Size 00:30:15.841 Max: 16 00:30:15.841 Min: 16 00:30:15.841 Number of Namespaces: 256 00:30:15.841 Compare Command: Supported 00:30:15.841 Write Uncorrectable Command: Not Supported 00:30:15.841 Dataset Management Command: Supported 00:30:15.841 Write Zeroes Command: Supported 00:30:15.841 Set Features Save Field: Supported 00:30:15.841 Reservations: Not Supported 00:30:15.841 Timestamp: Supported 00:30:15.841 Copy: Supported 00:30:15.841 Volatile Write Cache: Present 00:30:15.841 Atomic Write Unit (Normal): 1 00:30:15.841 Atomic Write Unit (PFail): 1 00:30:15.841 Atomic Compare & Write Unit: 1 00:30:15.841 Fused Compare & Write: Not Supported 00:30:15.841 Scatter-Gather List 00:30:15.841 SGL Command Set: Supported 00:30:15.841 SGL Keyed: Not Supported 00:30:15.841 SGL Bit Bucket Descriptor: Not Supported 00:30:15.841 SGL Metadata Pointer: Not Supported 00:30:15.841 Oversized SGL: Not Supported 00:30:15.841 SGL Metadata Address: Not Supported 00:30:15.841 SGL Offset: Not Supported 00:30:15.841 Transport SGL Data Block: Not Supported 00:30:15.841 Replay Protected Memory Block: Not Supported 00:30:15.841 00:30:15.841 Firmware Slot Information 00:30:15.841 ========================= 00:30:15.841 Active slot: 1 00:30:15.841 Slot 1 Firmware Revision: 1.0 00:30:15.841 00:30:15.841 00:30:15.841 Commands Supported and Effects 00:30:15.841 ============================== 00:30:15.841 Admin Commands 00:30:15.841 -------------- 00:30:15.841 Delete I/O Submission Queue (00h): Supported 00:30:15.841 Create I/O Submission Queue (01h): Supported 00:30:15.841 Get Log Page (02h): Supported 00:30:15.841 Delete I/O Completion Queue (04h): Supported 00:30:15.841 Create I/O Completion Queue (05h): Supported 00:30:15.841 Identify (06h): Supported 00:30:15.841 Abort (08h): Supported 00:30:15.841 Set Features (09h): Supported 00:30:15.841 Get Features (0Ah): Supported 00:30:15.841 Asynchronous Event Request (0Ch): Supported 00:30:15.841 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:15.841 Directive Send (19h): Supported 00:30:15.841 Directive Receive (1Ah): Supported 00:30:15.841 Virtualization Management (1Ch): Supported 00:30:15.841 Doorbell Buffer Config (7Ch): Supported 00:30:15.841 Format NVM (80h): Supported LBA-Change 00:30:15.841 I/O Commands 00:30:15.841 ------------ 00:30:15.841 Flush (00h): Supported LBA-Change 00:30:15.841 Write (01h): Supported LBA-Change 00:30:15.841 Read (02h): Supported 00:30:15.841 Compare (05h): Supported 00:30:15.841 Write Zeroes (08h): Supported LBA-Change 00:30:15.841 Dataset Management (09h): Supported LBA-Change 00:30:15.841 Unknown (0Ch): Supported 00:30:15.841 Unknown (12h): Supported 00:30:15.841 Copy
(19h): Supported LBA-Change 00:30:15.841 Unknown (1Dh): Supported LBA-Change 00:30:15.841 00:30:15.841 Error Log 00:30:15.841 ========= 00:30:15.841 00:30:15.841 Arbitration 00:30:15.841 =========== 00:30:15.841 Arbitration Burst: no limit 00:30:15.841 00:30:15.841 Power Management 00:30:15.841 ================ 00:30:15.841 Number of Power States: 1 00:30:15.841 Current Power State: Power State #0 00:30:15.841 Power State #0: 00:30:15.841 Max Power: 25.00 W 00:30:15.841 Non-Operational State: Operational 00:30:15.841 Entry Latency: 16 microseconds 00:30:15.841 Exit Latency: 4 microseconds 00:30:15.841 Relative Read Throughput: 0 00:30:15.841 Relative Read Latency: 0 00:30:15.841 Relative Write Throughput: 0 00:30:15.841 Relative Write Latency: 0 00:30:15.841 Idle Power: Not Reported 00:30:15.841 Active Power: Not Reported 00:30:15.841 Non-Operational Permissive Mode: Not Supported 00:30:15.841 00:30:15.841 Health Information 00:30:15.841 ================== 00:30:15.841 Critical Warnings: 00:30:15.841 Available Spare Space: OK 00:30:15.841 Temperature: OK 00:30:15.841 Device Reliability: OK 00:30:15.841 Read Only: No 00:30:15.841 Volatile Memory Backup: OK 00:30:15.841 Current Temperature: 323 Kelvin (50 Celsius) 00:30:15.841 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:15.841 Available Spare: 0% 00:30:15.841 Available Spare Threshold: 0% 00:30:15.841 Life Percentage Used: 0% 00:30:15.841 Data Units Read: 845 00:30:15.841 Data Units Written: 774 00:30:15.841 Host Read Commands: 34760 00:30:15.841 Host Write Commands: 34183 00:30:15.841 Controller Busy Time: 0 minutes 00:30:15.841 Power Cycles: 0 00:30:15.841 Power On Hours: 0 hours 00:30:15.841 Unsafe Shutdowns: 0 00:30:15.841 Unrecoverable Media Errors: 0 00:30:15.841 Lifetime Error Log Entries: 0 00:30:15.841 Warning Temperature Time: 0 minutes 00:30:15.841 Critical Temperature Time: 0 minutes 00:30:15.841 00:30:15.841 Number of Queues 00:30:15.841 ================ 00:30:15.841 Number of I/O Submission Queues: 64 00:30:15.841 Number of I/O Completion Queues: 64 00:30:15.841 00:30:15.841 ZNS Specific Controller Data 00:30:15.841 ============================ 00:30:15.841 Zone Append Size Limit: 0 00:30:15.841 00:30:15.841 00:30:15.841 Active Namespaces 00:30:15.841 ================= 00:30:15.841 Namespace ID:1 00:30:15.841 Error Recovery Timeout: Unlimited 00:30:15.841 Command Set Identifier: NVM (00h) 00:30:15.841 Deallocate: Supported 00:30:15.841 Deallocated/Unwritten Error: Supported 00:30:15.841 Deallocated Read Value: All 0x00 00:30:15.841 Deallocate in Write Zeroes: Not Supported 00:30:15.841 Deallocated Guard Field: 0xFFFF 00:30:15.841 Flush: Supported 00:30:15.841 Reservation: Not Supported 00:30:15.841 Namespace Sharing Capabilities: Multiple Controllers 00:30:15.841 Size (in LBAs): 262144 (1GiB) 00:30:15.841 Capacity (in LBAs): 262144 (1GiB) 00:30:15.841 Utilization (in LBAs): 262144 (1GiB) 00:30:15.841 Thin Provisioning: Not Supported 00:30:15.841 Per-NS Atomic Units: No 00:30:15.841 Maximum Single Source Range Length: 128 00:30:15.841 Maximum Copy Length: 128 00:30:15.841 Maximum Source Range Count: 128 00:30:15.841 NGUID/EUI64 Never Reused: No 00:30:15.841 Namespace Write Protected: No 00:30:15.841 Endurance group ID: 1 00:30:15.842 Number of LBA Formats: 8 00:30:15.842 Current LBA Format: LBA Format #04 00:30:15.842 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:15.842 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:15.842 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:15.842 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:30:15.842 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:15.842 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:15.842 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:15.842 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:15.842 00:30:15.842 Get Feature FDP: 00:30:15.842 ================ 00:30:15.842 Enabled: Yes 00:30:15.842 FDP configuration index: 0 00:30:15.842 00:30:15.842 FDP configurations log page 00:30:15.842 =========================== 00:30:15.842 Number of FDP configurations: 1 00:30:15.842 Version: 0 00:30:15.842 Size: 112 00:30:15.842 FDP Configuration Descriptor: 0 00:30:15.842 Descriptor Size: 96 00:30:15.842 Reclaim Group Identifier format: 2 00:30:15.842 FDP Volatile Write Cache: Not Present 00:30:15.842 FDP Configuration: Valid 00:30:15.842 Vendor Specific Size: 0 00:30:15.842 Number of Reclaim Groups: 2 00:30:15.842 Number of Reclaim Unit Handles: 8 00:30:15.842 Max Placement Identifiers: 128 00:30:15.842 Number of Namespaces Supported: 256 00:30:15.842 Reclaim Unit Nominal Size: 6000000 bytes 00:30:15.842 Estimated Reclaim Unit Time Limit: Not Reported 00:30:15.842 RUH Desc #000: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #001: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #002: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #003: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #004: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #005: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #006: RUH Type: Initially Isolated 00:30:15.842 RUH Desc #007: RUH Type: Initially Isolated 00:30:15.842 00:30:15.842 FDP reclaim unit handle usage log page 00:30:15.842 ====================================== 00:30:15.842 Number of Reclaim Unit Handles: 8 00:30:15.842 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:30:15.842 RUH Usage Desc #001: RUH Attributes: Unused 00:30:15.842 RUH Usage Desc #002: RUH Attributes: Unused 00:30:15.842 RUH Usage Desc #003: RUH Attributes: Unused 00:30:15.842 RUH Usage Desc #004: RUH Attributes: Unused 00:30:15.842 RUH Usage Desc #005: RUH Attributes: Unused 00:30:15.842 RUH Usage Desc #006: RUH Attributes: Unused 00:30:15.842 RUH Usage Desc #007: RUH Attributes: Unused 00:30:15.842 00:30:15.842 FDP statistics log page 00:30:15.842 ======================= 00:30:15.842 Host bytes with metadata written: 486842368 00:30:15.842 Media bytes with metadata written: 486895616 00:30:15.842 Media bytes erased: 0 00:30:15.842 00:30:15.842 FDP events log page 00:30:15.842 =================== 00:30:15.842 Number of FDP events: 0 00:30:15.842 00:30:15.842 NVM Specific Namespace Data 00:30:15.842 =========================== 00:30:15.842 Logical Block Storage Tag Mask: 0 00:30:15.842 Protection Information Capabilities: 00:30:15.842 16b Guard Protection Information Storage Tag Support: No 00:30:15.842 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:30:15.842 Storage Tag Check Read Support: No 00:30:15.842 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:30:15.842 00:30:15.842 real 0m1.647s 00:30:15.842 user 0m0.604s 00:30:15.842 sys 0m0.823s 00:30:15.842 05:41:35 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:15.842 05:41:35 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:30:15.842 ************************************ 00:30:15.842 END TEST nvme_identify 00:30:15.842 ************************************ 00:30:16.102 05:41:35 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:16.102 05:41:35 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:16.102 05:41:35 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:16.102 05:41:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:16.102 ************************************ 00:30:16.102 START TEST nvme_perf 00:30:16.102 ************************************ 00:30:16.102 05:41:35 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:30:16.102 05:41:35 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:17.484 Initializing NVMe Controllers 00:30:17.484 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:30:17.484 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:30:17.484 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:30:17.484 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:30:17.484 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:30:17.484 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:30:17.484 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:30:17.484 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:30:17.484 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:30:17.484 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:30:17.484 Initialization complete. Launching workers. 
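(Two consistency checks worth applying to the summary table below, using the 0000:00:10.0 row as the example; all numbers come from the table and the command line above. First, the MiB/s column is just IOPS times the 12288-byte I/O size set with -o 12288; second, by Little's law the product of IOPS and mean latency recovers the number of outstanding I/Os, i.e. the queue depth requested with -q 128:)

\[ \frac{14933.32 \times 12288}{2^{20}} \approx 175.00\ \text{MiB/s}, \qquad 14933.32\ \text{IOPS} \times 8594.37\ \mu\text{s} \approx 128.3 \approx 128 = q \]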
00:30:17.484 ======================================================== 00:30:17.484 Latency(us) 00:30:17.484 Device Information : IOPS MiB/s Average min max 00:30:17.484 PCIE (0000:00:10.0) NSID 1 from core 0: 14933.32 175.00 8594.37 6940.76 46732.57 00:30:17.484 PCIE (0000:00:11.0) NSID 1 from core 0: 14933.32 175.00 8580.71 7034.28 44189.58 00:30:17.484 PCIE (0000:00:13.0) NSID 1 from core 0: 14933.32 175.00 8565.02 7039.01 42440.61 00:30:17.484 PCIE (0000:00:12.0) NSID 1 from core 0: 14933.32 175.00 8549.67 7017.53 40121.65 00:30:17.484 PCIE (0000:00:12.0) NSID 2 from core 0: 14933.32 175.00 8534.27 7043.44 37784.97 00:30:17.484 PCIE (0000:00:12.0) NSID 3 from core 0: 14997.14 175.75 8482.82 7039.55 30521.58 00:30:17.484 ======================================================== 00:30:17.484 Total : 89663.74 1050.75 8551.09 6940.76 46732.57 00:30:17.484 00:30:17.484 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:17.484 ================================================================================= 00:30:17.484 1.00000% : 7211.822us 00:30:17.484 10.00000% : 7440.769us 00:30:17.484 25.00000% : 7726.952us 00:30:17.484 50.00000% : 8070.372us 00:30:17.484 75.00000% : 8471.029us 00:30:17.484 90.00000% : 9501.289us 00:30:17.484 95.00000% : 10359.839us 00:30:17.484 98.00000% : 13450.620us 00:30:17.484 99.00000% : 15682.851us 00:30:17.485 99.50000% : 39836.730us 00:30:17.485 99.90000% : 46247.238us 00:30:17.485 99.99000% : 46705.132us 00:30:17.485 99.99900% : 46934.079us 00:30:17.485 99.99990% : 46934.079us 00:30:17.485 99.99999% : 46934.079us 00:30:17.485 00:30:17.485 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:17.485 ================================================================================= 00:30:17.485 1.00000% : 7297.677us 00:30:17.485 10.00000% : 7555.242us 00:30:17.485 25.00000% : 7726.952us 00:30:17.485 50.00000% : 8070.372us 00:30:17.485 75.00000% : 8471.029us 00:30:17.485 90.00000% : 9501.289us 00:30:17.485 95.00000% : 10188.129us 00:30:17.485 98.00000% : 13565.093us 00:30:17.485 99.00000% : 15224.957us 00:30:17.485 99.50000% : 37776.210us 00:30:17.485 99.90000% : 43957.771us 00:30:17.485 99.99000% : 44186.718us 00:30:17.485 99.99900% : 44415.665us 00:30:17.485 99.99990% : 44415.665us 00:30:17.485 99.99999% : 44415.665us 00:30:17.485 00:30:17.485 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:17.485 ================================================================================= 00:30:17.485 1.00000% : 7269.059us 00:30:17.485 10.00000% : 7555.242us 00:30:17.485 25.00000% : 7784.189us 00:30:17.485 50.00000% : 8070.372us 00:30:17.485 75.00000% : 8471.029us 00:30:17.485 90.00000% : 9501.289us 00:30:17.485 95.00000% : 10188.129us 00:30:17.485 98.00000% : 13679.567us 00:30:17.485 99.00000% : 14996.010us 00:30:17.485 99.50000% : 35715.689us 00:30:17.485 99.90000% : 42126.197us 00:30:17.485 99.99000% : 42584.091us 00:30:17.485 99.99900% : 42584.091us 00:30:17.485 99.99990% : 42584.091us 00:30:17.485 99.99999% : 42584.091us 00:30:17.485 00:30:17.485 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:17.485 ================================================================================= 00:30:17.485 1.00000% : 7297.677us 00:30:17.485 10.00000% : 7555.242us 00:30:17.485 25.00000% : 7784.189us 00:30:17.485 50.00000% : 8070.372us 00:30:17.485 75.00000% : 8413.792us 00:30:17.485 90.00000% : 9501.289us 00:30:17.485 95.00000% : 10302.603us 00:30:17.485 98.00000% : 13965.750us 00:30:17.485 99.00000% : 
14996.010us 00:30:17.485 99.50000% : 33426.222us 00:30:17.485 99.90000% : 39836.730us 00:30:17.485 99.99000% : 40294.624us 00:30:17.485 99.99900% : 40294.624us 00:30:17.485 99.99990% : 40294.624us 00:30:17.485 99.99999% : 40294.624us 00:30:17.485 00:30:17.485 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:30:17.485 ================================================================================= 00:30:17.485 1.00000% : 7297.677us 00:30:17.485 10.00000% : 7555.242us 00:30:17.485 25.00000% : 7726.952us 00:30:17.485 50.00000% : 8070.372us 00:30:17.485 75.00000% : 8413.792us 00:30:17.485 90.00000% : 9501.289us 00:30:17.485 95.00000% : 10359.839us 00:30:17.485 98.00000% : 14080.224us 00:30:17.485 99.00000% : 14996.010us 00:30:17.485 99.50000% : 31136.755us 00:30:17.485 99.90000% : 37547.263us 00:30:17.485 99.99000% : 37776.210us 00:30:17.485 99.99900% : 38005.156us 00:30:17.485 99.99990% : 38005.156us 00:30:17.485 99.99999% : 38005.156us 00:30:17.485 00:30:17.485 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:17.485 ================================================================================= 00:30:17.485 1.00000% : 7297.677us 00:30:17.485 10.00000% : 7555.242us 00:30:17.485 25.00000% : 7784.189us 00:30:17.485 50.00000% : 8070.372us 00:30:17.485 75.00000% : 8413.792us 00:30:17.485 90.00000% : 9558.526us 00:30:17.485 95.00000% : 10531.549us 00:30:17.485 98.00000% : 13908.514us 00:30:17.485 99.00000% : 15453.904us 00:30:17.485 99.50000% : 23581.513us 00:30:17.485 99.90000% : 30220.968us 00:30:17.485 99.99000% : 30678.861us 00:30:17.485 99.99900% : 30678.861us 00:30:17.485 99.99990% : 30678.861us 00:30:17.485 99.99999% : 30678.861us 00:30:17.485 00:30:17.485 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:30:17.485 ============================================================================== 00:30:17.485 Range in us Cumulative IO count 00:30:17.485 6925.638 - 6954.257: 0.0067% ( 1) 00:30:17.485 6954.257 - 6982.875: 0.0467% ( 6) 00:30:17.485 6982.875 - 7011.493: 0.0801% ( 5) 00:30:17.485 7011.493 - 7040.112: 0.1335% ( 8) 00:30:17.485 7040.112 - 7068.730: 0.2270% ( 14) 00:30:17.485 7068.730 - 7097.348: 0.3272% ( 15) 00:30:17.485 7097.348 - 7125.967: 0.4741% ( 22) 00:30:17.485 7125.967 - 7154.585: 0.6878% ( 32) 00:30:17.485 7154.585 - 7183.203: 0.9882% ( 45) 00:30:17.485 7183.203 - 7211.822: 1.4490% ( 69) 00:30:17.485 7211.822 - 7240.440: 2.1568% ( 106) 00:30:17.485 7240.440 - 7269.059: 2.7511% ( 89) 00:30:17.485 7269.059 - 7297.677: 3.6926% ( 141) 00:30:17.485 7297.677 - 7326.295: 4.8077% ( 167) 00:30:17.485 7326.295 - 7383.532: 7.6122% ( 420) 00:30:17.485 7383.532 - 7440.769: 10.7105% ( 464) 00:30:17.485 7440.769 - 7498.005: 13.8021% ( 463) 00:30:17.485 7498.005 - 7555.242: 16.9071% ( 465) 00:30:17.485 7555.242 - 7612.479: 20.1322% ( 483) 00:30:17.485 7612.479 - 7669.715: 23.5777% ( 516) 00:30:17.485 7669.715 - 7726.952: 27.1167% ( 530) 00:30:17.485 7726.952 - 7784.189: 30.8360% ( 557) 00:30:17.485 7784.189 - 7841.425: 34.8892% ( 607) 00:30:17.485 7841.425 - 7898.662: 38.6752% ( 567) 00:30:17.485 7898.662 - 7955.899: 42.4212% ( 561) 00:30:17.485 7955.899 - 8013.135: 46.5144% ( 613) 00:30:17.485 8013.135 - 8070.372: 50.2938% ( 566) 00:30:17.485 8070.372 - 8127.609: 54.2668% ( 595) 00:30:17.485 8127.609 - 8184.845: 58.1464% ( 581) 00:30:17.485 8184.845 - 8242.082: 61.8657% ( 557) 00:30:17.485 8242.082 - 8299.319: 65.6450% ( 566) 00:30:17.485 8299.319 - 8356.555: 69.1506% ( 525) 00:30:17.485 8356.555 - 8413.792: 72.3491% ( 479) 
00:30:17.485 8413.792 - 8471.029: 75.1736% ( 423) 00:30:17.485 8471.029 - 8528.266: 77.4306% ( 338) 00:30:17.485 8528.266 - 8585.502: 79.0598% ( 244) 00:30:17.485 8585.502 - 8642.739: 80.2885% ( 184) 00:30:17.485 8642.739 - 8699.976: 81.3568% ( 160) 00:30:17.485 8699.976 - 8757.212: 82.3584% ( 150) 00:30:17.485 8757.212 - 8814.449: 83.2999% ( 141) 00:30:17.485 8814.449 - 8871.686: 84.1346% ( 125) 00:30:17.485 8871.686 - 8928.922: 84.9426% ( 121) 00:30:17.485 8928.922 - 8986.159: 85.5569% ( 92) 00:30:17.485 8986.159 - 9043.396: 86.1779% ( 93) 00:30:17.485 9043.396 - 9100.632: 86.7655% ( 88) 00:30:17.485 9100.632 - 9157.869: 87.2663% ( 75) 00:30:17.485 9157.869 - 9215.106: 87.7337% ( 70) 00:30:17.485 9215.106 - 9272.342: 88.2479% ( 77) 00:30:17.485 9272.342 - 9329.579: 88.7754% ( 79) 00:30:17.485 9329.579 - 9386.816: 89.2695% ( 74) 00:30:17.485 9386.816 - 9444.052: 89.8037% ( 80) 00:30:17.485 9444.052 - 9501.289: 90.2978% ( 74) 00:30:17.485 9501.289 - 9558.526: 90.8120% ( 77) 00:30:17.485 9558.526 - 9615.762: 91.2994% ( 73) 00:30:17.485 9615.762 - 9672.999: 91.7802% ( 72) 00:30:17.485 9672.999 - 9730.236: 92.2676% ( 73) 00:30:17.485 9730.236 - 9787.472: 92.7417% ( 71) 00:30:17.485 9787.472 - 9844.709: 93.1824% ( 66) 00:30:17.485 9844.709 - 9901.946: 93.5831% ( 60) 00:30:17.485 9901.946 - 9959.183: 93.9169% ( 50) 00:30:17.485 9959.183 - 10016.419: 94.1907% ( 41) 00:30:17.485 10016.419 - 10073.656: 94.4578% ( 40) 00:30:17.485 10073.656 - 10130.893: 94.6114% ( 23) 00:30:17.485 10130.893 - 10188.129: 94.7917% ( 27) 00:30:17.485 10188.129 - 10245.366: 94.9119% ( 18) 00:30:17.485 10245.366 - 10302.603: 94.9987% ( 13) 00:30:17.485 10302.603 - 10359.839: 95.0588% ( 9) 00:30:17.485 10359.839 - 10417.076: 95.1322% ( 11) 00:30:17.485 10417.076 - 10474.313: 95.2057% ( 11) 00:30:17.485 10474.313 - 10531.549: 95.2858% ( 12) 00:30:17.485 10531.549 - 10588.786: 95.3392% ( 8) 00:30:17.485 10588.786 - 10646.023: 95.4193% ( 12) 00:30:17.485 10646.023 - 10703.259: 95.4995% ( 12) 00:30:17.485 10703.259 - 10760.496: 95.5729% ( 11) 00:30:17.485 10760.496 - 10817.733: 95.6263% ( 8) 00:30:17.485 10817.733 - 10874.969: 95.6731% ( 7) 00:30:17.485 10874.969 - 10932.206: 95.7332% ( 9) 00:30:17.485 10932.206 - 10989.443: 95.7866% ( 8) 00:30:17.485 10989.443 - 11046.679: 95.8600% ( 11) 00:30:17.485 11046.679 - 11103.916: 95.9135% ( 8) 00:30:17.485 11103.916 - 11161.153: 95.9802% ( 10) 00:30:17.485 11161.153 - 11218.390: 96.0537% ( 11) 00:30:17.485 11218.390 - 11275.626: 96.1071% ( 8) 00:30:17.485 11275.626 - 11332.863: 96.1806% ( 11) 00:30:17.485 11332.863 - 11390.100: 96.2273% ( 7) 00:30:17.485 11390.100 - 11447.336: 96.2674% ( 6) 00:30:17.485 11447.336 - 11504.573: 96.3141% ( 7) 00:30:17.485 11504.573 - 11561.810: 96.3675% ( 8) 00:30:17.485 11561.810 - 11619.046: 96.4276% ( 9) 00:30:17.485 11619.046 - 11676.283: 96.4476% ( 3) 00:30:17.485 11676.283 - 11733.520: 96.4944% ( 7) 00:30:17.485 11733.520 - 11790.756: 96.5545% ( 9) 00:30:17.485 11790.756 - 11847.993: 96.5879% ( 5) 00:30:17.485 11847.993 - 11905.230: 96.6346% ( 7) 00:30:17.485 11905.230 - 11962.466: 96.6814% ( 7) 00:30:17.485 11962.466 - 12019.703: 96.7281% ( 7) 00:30:17.485 12019.703 - 12076.940: 96.7748% ( 7) 00:30:17.485 12076.940 - 12134.176: 96.8283% ( 8) 00:30:17.485 12134.176 - 12191.413: 96.8884% ( 9) 00:30:17.485 12191.413 - 12248.650: 96.9485% ( 9) 00:30:17.485 12248.650 - 12305.886: 97.0019% ( 8) 00:30:17.485 12305.886 - 12363.123: 97.0553% ( 8) 00:30:17.486 12363.123 - 12420.360: 97.0954% ( 6) 00:30:17.486 12420.360 - 12477.597: 97.1421% ( 7) 
00:30:17.486 12477.597 - 12534.833: 97.1888% ( 7) 00:30:17.486 12534.833 - 12592.070: 97.2289% ( 6) 00:30:17.486 12592.070 - 12649.307: 97.2756% ( 7) 00:30:17.486 12649.307 - 12706.543: 97.3357% ( 9) 00:30:17.486 12706.543 - 12763.780: 97.3825% ( 7) 00:30:17.486 12763.780 - 12821.017: 97.4359% ( 8) 00:30:17.486 12821.017 - 12878.253: 97.4693% ( 5) 00:30:17.486 12878.253 - 12935.490: 97.4960% ( 4) 00:30:17.486 12935.490 - 12992.727: 97.5361% ( 6) 00:30:17.486 12992.727 - 13049.963: 97.5761% ( 6) 00:30:17.486 13049.963 - 13107.200: 97.6028% ( 4) 00:30:17.486 13107.200 - 13164.437: 97.6830% ( 12) 00:30:17.486 13164.437 - 13221.673: 97.7497% ( 10) 00:30:17.486 13221.673 - 13278.910: 97.8098% ( 9) 00:30:17.486 13278.910 - 13336.147: 97.8833% ( 11) 00:30:17.486 13336.147 - 13393.383: 97.9634% ( 12) 00:30:17.486 13393.383 - 13450.620: 98.0168% ( 8) 00:30:17.486 13450.620 - 13507.857: 98.0970% ( 12) 00:30:17.486 13507.857 - 13565.093: 98.1571% ( 9) 00:30:17.486 13565.093 - 13622.330: 98.1971% ( 6) 00:30:17.486 13622.330 - 13679.567: 98.2305% ( 5) 00:30:17.486 13679.567 - 13736.803: 98.2572% ( 4) 00:30:17.486 13736.803 - 13794.040: 98.2973% ( 6) 00:30:17.486 13794.040 - 13851.277: 98.3440% ( 7) 00:30:17.486 13851.277 - 13908.514: 98.3640% ( 3) 00:30:17.486 13908.514 - 13965.750: 98.4108% ( 7) 00:30:17.486 13965.750 - 14022.987: 98.4442% ( 5) 00:30:17.486 14022.987 - 14080.224: 98.4776% ( 5) 00:30:17.486 14080.224 - 14137.460: 98.5243% ( 7) 00:30:17.486 14137.460 - 14194.697: 98.5577% ( 5) 00:30:17.486 14194.697 - 14251.934: 98.5911% ( 5) 00:30:17.486 14251.934 - 14309.170: 98.6311% ( 6) 00:30:17.486 14309.170 - 14366.407: 98.6645% ( 5) 00:30:17.486 14366.407 - 14423.644: 98.6979% ( 5) 00:30:17.486 14423.644 - 14480.880: 98.7179% ( 3) 00:30:17.486 14652.590 - 14767.064: 98.7447% ( 4) 00:30:17.486 14767.064 - 14881.537: 98.7847% ( 6) 00:30:17.486 14881.537 - 14996.010: 98.8248% ( 6) 00:30:17.486 14996.010 - 15110.484: 98.8515% ( 4) 00:30:17.486 15110.484 - 15224.957: 98.8916% ( 6) 00:30:17.486 15224.957 - 15339.431: 98.9249% ( 5) 00:30:17.486 15339.431 - 15453.904: 98.9650% ( 6) 00:30:17.486 15453.904 - 15568.377: 98.9984% ( 5) 00:30:17.486 15568.377 - 15682.851: 99.0318% ( 5) 00:30:17.486 15682.851 - 15797.324: 99.0718% ( 6) 00:30:17.486 15797.324 - 15911.797: 99.1052% ( 5) 00:30:17.486 15911.797 - 16026.271: 99.1453% ( 6) 00:30:17.486 37776.210 - 38005.156: 99.1787% ( 5) 00:30:17.486 38005.156 - 38234.103: 99.2121% ( 5) 00:30:17.486 38234.103 - 38463.050: 99.2521% ( 6) 00:30:17.486 38463.050 - 38691.997: 99.2922% ( 6) 00:30:17.486 38691.997 - 38920.943: 99.3389% ( 7) 00:30:17.486 38920.943 - 39149.890: 99.3857% ( 7) 00:30:17.486 39149.890 - 39378.837: 99.4324% ( 7) 00:30:17.486 39378.837 - 39607.783: 99.4792% ( 7) 00:30:17.486 39607.783 - 39836.730: 99.5326% ( 8) 00:30:17.486 39836.730 - 40065.677: 99.5726% ( 6) 00:30:17.486 44415.665 - 44644.611: 99.5994% ( 4) 00:30:17.486 44644.611 - 44873.558: 99.6394% ( 6) 00:30:17.486 44873.558 - 45102.505: 99.6862% ( 7) 00:30:17.486 45102.505 - 45331.452: 99.7396% ( 8) 00:30:17.486 45331.452 - 45560.398: 99.7796% ( 6) 00:30:17.486 45560.398 - 45789.345: 99.8197% ( 6) 00:30:17.486 45789.345 - 46018.292: 99.8731% ( 8) 00:30:17.486 46018.292 - 46247.238: 99.9132% ( 6) 00:30:17.486 46247.238 - 46476.185: 99.9599% ( 7) 00:30:17.486 46476.185 - 46705.132: 99.9933% ( 5) 00:30:17.486 46705.132 - 46934.079: 100.0000% ( 1) 00:30:17.486 00:30:17.486 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:30:17.486 
============================================================================== 00:30:17.486 Range in us Cumulative IO count 00:30:17.486 7011.493 - 7040.112: 0.0134% ( 2) 00:30:17.486 7040.112 - 7068.730: 0.0401% ( 4) 00:30:17.486 7068.730 - 7097.348: 0.0735% ( 5) 00:30:17.486 7097.348 - 7125.967: 0.1469% ( 11) 00:30:17.486 7125.967 - 7154.585: 0.2404% ( 14) 00:30:17.486 7154.585 - 7183.203: 0.3005% ( 9) 00:30:17.486 7183.203 - 7211.822: 0.4674% ( 25) 00:30:17.486 7211.822 - 7240.440: 0.6611% ( 29) 00:30:17.486 7240.440 - 7269.059: 0.9148% ( 38) 00:30:17.486 7269.059 - 7297.677: 1.3822% ( 70) 00:30:17.486 7297.677 - 7326.295: 2.0499% ( 100) 00:30:17.486 7326.295 - 7383.532: 3.8395% ( 268) 00:30:17.486 7383.532 - 7440.769: 6.3168% ( 371) 00:30:17.486 7440.769 - 7498.005: 9.7222% ( 510) 00:30:17.486 7498.005 - 7555.242: 13.6819% ( 593) 00:30:17.486 7555.242 - 7612.479: 17.3544% ( 550) 00:30:17.486 7612.479 - 7669.715: 21.0470% ( 553) 00:30:17.486 7669.715 - 7726.952: 25.1803% ( 619) 00:30:17.486 7726.952 - 7784.189: 29.2468% ( 609) 00:30:17.486 7784.189 - 7841.425: 33.5737% ( 648) 00:30:17.486 7841.425 - 7898.662: 38.0676% ( 673) 00:30:17.486 7898.662 - 7955.899: 42.7150% ( 696) 00:30:17.486 7955.899 - 8013.135: 47.3357% ( 692) 00:30:17.486 8013.135 - 8070.372: 51.7161% ( 656) 00:30:17.486 8070.372 - 8127.609: 56.1765% ( 668) 00:30:17.486 8127.609 - 8184.845: 60.6704% ( 673) 00:30:17.486 8184.845 - 8242.082: 64.8972% ( 633) 00:30:17.486 8242.082 - 8299.319: 68.8235% ( 588) 00:30:17.486 8299.319 - 8356.555: 72.0419% ( 482) 00:30:17.486 8356.555 - 8413.792: 74.9733% ( 439) 00:30:17.486 8413.792 - 8471.029: 77.0833% ( 316) 00:30:17.486 8471.029 - 8528.266: 78.5323% ( 217) 00:30:17.486 8528.266 - 8585.502: 79.7610% ( 184) 00:30:17.486 8585.502 - 8642.739: 80.8494% ( 163) 00:30:17.486 8642.739 - 8699.976: 81.8376% ( 148) 00:30:17.486 8699.976 - 8757.212: 82.7257% ( 133) 00:30:17.486 8757.212 - 8814.449: 83.4135% ( 103) 00:30:17.486 8814.449 - 8871.686: 84.0612% ( 97) 00:30:17.486 8871.686 - 8928.922: 84.5954% ( 80) 00:30:17.486 8928.922 - 8986.159: 85.0628% ( 70) 00:30:17.486 8986.159 - 9043.396: 85.6370% ( 86) 00:30:17.486 9043.396 - 9100.632: 86.1645% ( 79) 00:30:17.486 9100.632 - 9157.869: 86.7455% ( 87) 00:30:17.486 9157.869 - 9215.106: 87.3464% ( 90) 00:30:17.486 9215.106 - 9272.342: 87.9607% ( 92) 00:30:17.486 9272.342 - 9329.579: 88.5550% ( 89) 00:30:17.486 9329.579 - 9386.816: 89.2361% ( 102) 00:30:17.486 9386.816 - 9444.052: 89.8438% ( 91) 00:30:17.486 9444.052 - 9501.289: 90.5048% ( 99) 00:30:17.486 9501.289 - 9558.526: 91.0924% ( 88) 00:30:17.486 9558.526 - 9615.762: 91.6600% ( 85) 00:30:17.486 9615.762 - 9672.999: 92.2409% ( 87) 00:30:17.486 9672.999 - 9730.236: 92.7284% ( 73) 00:30:17.486 9730.236 - 9787.472: 93.1958% ( 70) 00:30:17.486 9787.472 - 9844.709: 93.6165% ( 63) 00:30:17.486 9844.709 - 9901.946: 94.0037% ( 58) 00:30:17.486 9901.946 - 9959.183: 94.3576% ( 53) 00:30:17.486 9959.183 - 10016.419: 94.5780% ( 33) 00:30:17.486 10016.419 - 10073.656: 94.7783% ( 30) 00:30:17.486 10073.656 - 10130.893: 94.9519% ( 26) 00:30:17.486 10130.893 - 10188.129: 95.0387% ( 13) 00:30:17.486 10188.129 - 10245.366: 95.1055% ( 10) 00:30:17.486 10245.366 - 10302.603: 95.1522% ( 7) 00:30:17.486 10302.603 - 10359.839: 95.1923% ( 6) 00:30:17.486 10359.839 - 10417.076: 95.2324% ( 6) 00:30:17.486 10417.076 - 10474.313: 95.2724% ( 6) 00:30:17.486 10474.313 - 10531.549: 95.3058% ( 5) 00:30:17.486 10531.549 - 10588.786: 95.3592% ( 8) 00:30:17.486 10588.786 - 10646.023: 95.4060% ( 7) 00:30:17.486 10646.023 
- 10703.259: 95.4460% ( 6) 00:30:17.486 10703.259 - 10760.496: 95.5128% ( 10) 00:30:17.486 10760.496 - 10817.733: 95.5662% ( 8) 00:30:17.486 10817.733 - 10874.969: 95.6330% ( 10) 00:30:17.486 10874.969 - 10932.206: 95.6931% ( 9) 00:30:17.486 10932.206 - 10989.443: 95.7599% ( 10) 00:30:17.486 10989.443 - 11046.679: 95.8200% ( 9) 00:30:17.486 11046.679 - 11103.916: 95.8801% ( 9) 00:30:17.486 11103.916 - 11161.153: 95.9468% ( 10) 00:30:17.486 11161.153 - 11218.390: 96.0136% ( 10) 00:30:17.486 11218.390 - 11275.626: 96.0670% ( 8) 00:30:17.486 11275.626 - 11332.863: 96.1338% ( 10) 00:30:17.486 11332.863 - 11390.100: 96.2006% ( 10) 00:30:17.486 11390.100 - 11447.336: 96.2607% ( 9) 00:30:17.486 11447.336 - 11504.573: 96.3275% ( 10) 00:30:17.486 11504.573 - 11561.810: 96.3876% ( 9) 00:30:17.486 11561.810 - 11619.046: 96.4543% ( 10) 00:30:17.486 11619.046 - 11676.283: 96.5011% ( 7) 00:30:17.486 11676.283 - 11733.520: 96.5678% ( 10) 00:30:17.486 11733.520 - 11790.756: 96.6146% ( 7) 00:30:17.486 11790.756 - 11847.993: 96.6213% ( 1) 00:30:17.486 11847.993 - 11905.230: 96.6546% ( 5) 00:30:17.486 11905.230 - 11962.466: 96.6880% ( 5) 00:30:17.486 11962.466 - 12019.703: 96.7214% ( 5) 00:30:17.486 12019.703 - 12076.940: 96.7615% ( 6) 00:30:17.486 12076.940 - 12134.176: 96.8015% ( 6) 00:30:17.486 12134.176 - 12191.413: 96.8349% ( 5) 00:30:17.486 12191.413 - 12248.650: 96.8683% ( 5) 00:30:17.486 12248.650 - 12305.886: 96.9084% ( 6) 00:30:17.486 12305.886 - 12363.123: 96.9418% ( 5) 00:30:17.486 12363.123 - 12420.360: 96.9685% ( 4) 00:30:17.486 12420.360 - 12477.597: 97.0085% ( 6) 00:30:17.486 12477.597 - 12534.833: 97.0353% ( 4) 00:30:17.486 12534.833 - 12592.070: 97.0753% ( 6) 00:30:17.486 12592.070 - 12649.307: 97.1087% ( 5) 00:30:17.486 12649.307 - 12706.543: 97.1421% ( 5) 00:30:17.486 12706.543 - 12763.780: 97.1688% ( 4) 00:30:17.486 12763.780 - 12821.017: 97.2155% ( 7) 00:30:17.486 12821.017 - 12878.253: 97.2957% ( 12) 00:30:17.486 12878.253 - 12935.490: 97.3691% ( 11) 00:30:17.486 12935.490 - 12992.727: 97.4225% ( 8) 00:30:17.486 12992.727 - 13049.963: 97.5027% ( 12) 00:30:17.486 13049.963 - 13107.200: 97.5761% ( 11) 00:30:17.486 13107.200 - 13164.437: 97.6295% ( 8) 00:30:17.486 13164.437 - 13221.673: 97.6830% ( 8) 00:30:17.487 13221.673 - 13278.910: 97.7364% ( 8) 00:30:17.487 13278.910 - 13336.147: 97.8032% ( 10) 00:30:17.487 13336.147 - 13393.383: 97.8699% ( 10) 00:30:17.487 13393.383 - 13450.620: 97.9167% ( 7) 00:30:17.487 13450.620 - 13507.857: 97.9701% ( 8) 00:30:17.487 13507.857 - 13565.093: 98.0101% ( 6) 00:30:17.487 13565.093 - 13622.330: 98.0502% ( 6) 00:30:17.487 13622.330 - 13679.567: 98.0903% ( 6) 00:30:17.487 13679.567 - 13736.803: 98.1370% ( 7) 00:30:17.487 13736.803 - 13794.040: 98.1904% ( 8) 00:30:17.487 13794.040 - 13851.277: 98.2505% ( 9) 00:30:17.487 13851.277 - 13908.514: 98.3106% ( 9) 00:30:17.487 13908.514 - 13965.750: 98.3574% ( 7) 00:30:17.487 13965.750 - 14022.987: 98.3908% ( 5) 00:30:17.487 14022.987 - 14080.224: 98.4108% ( 3) 00:30:17.487 14080.224 - 14137.460: 98.4375% ( 4) 00:30:17.487 14137.460 - 14194.697: 98.4575% ( 3) 00:30:17.487 14194.697 - 14251.934: 98.4776% ( 3) 00:30:17.487 14251.934 - 14309.170: 98.4976% ( 3) 00:30:17.487 14309.170 - 14366.407: 98.5243% ( 4) 00:30:17.487 14366.407 - 14423.644: 98.5577% ( 5) 00:30:17.487 14423.644 - 14480.880: 98.5911% ( 5) 00:30:17.487 14480.880 - 14538.117: 98.6378% ( 7) 00:30:17.487 14538.117 - 14595.354: 98.6779% ( 6) 00:30:17.487 14595.354 - 14652.590: 98.7246% ( 7) 00:30:17.487 14652.590 - 14767.064: 98.8114% ( 13) 
00:30:17.487 14767.064 - 14881.537: 98.8982% ( 13) 00:30:17.487 14881.537 - 14996.010: 98.9517% ( 8) 00:30:17.487 14996.010 - 15110.484: 98.9917% ( 6) 00:30:17.487 15110.484 - 15224.957: 99.0385% ( 7) 00:30:17.487 15224.957 - 15339.431: 99.0852% ( 7) 00:30:17.487 15339.431 - 15453.904: 99.1253% ( 6) 00:30:17.487 15453.904 - 15568.377: 99.1453% ( 3) 00:30:17.487 35715.689 - 35944.636: 99.1653% ( 3) 00:30:17.487 35944.636 - 36173.583: 99.2121% ( 7) 00:30:17.487 36173.583 - 36402.529: 99.2455% ( 5) 00:30:17.487 36402.529 - 36631.476: 99.2922% ( 7) 00:30:17.487 36631.476 - 36860.423: 99.3389% ( 7) 00:30:17.487 36860.423 - 37089.369: 99.3857% ( 7) 00:30:17.487 37089.369 - 37318.316: 99.4324% ( 7) 00:30:17.487 37318.316 - 37547.263: 99.4858% ( 8) 00:30:17.487 37547.263 - 37776.210: 99.5326% ( 7) 00:30:17.487 37776.210 - 38005.156: 99.5726% ( 6) 00:30:17.487 42126.197 - 42355.144: 99.5927% ( 3) 00:30:17.487 42355.144 - 42584.091: 99.6461% ( 8) 00:30:17.487 42584.091 - 42813.038: 99.6928% ( 7) 00:30:17.487 42813.038 - 43041.984: 99.7463% ( 8) 00:30:17.487 43041.984 - 43270.931: 99.7930% ( 7) 00:30:17.487 43270.931 - 43499.878: 99.8397% ( 7) 00:30:17.487 43499.878 - 43728.824: 99.8932% ( 8) 00:30:17.487 43728.824 - 43957.771: 99.9466% ( 8) 00:30:17.487 43957.771 - 44186.718: 99.9933% ( 7) 00:30:17.487 44186.718 - 44415.665: 100.0000% ( 1) 00:30:17.487 00:30:17.487 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:30:17.487 ============================================================================== 00:30:17.487 Range in us Cumulative IO count 00:30:17.487 7011.493 - 7040.112: 0.0067% ( 1) 00:30:17.487 7040.112 - 7068.730: 0.0467% ( 6) 00:30:17.487 7068.730 - 7097.348: 0.0801% ( 5) 00:30:17.487 7097.348 - 7125.967: 0.1335% ( 8) 00:30:17.487 7125.967 - 7154.585: 0.2471% ( 17) 00:30:17.487 7154.585 - 7183.203: 0.3339% ( 13) 00:30:17.487 7183.203 - 7211.822: 0.5275% ( 29) 00:30:17.487 7211.822 - 7240.440: 0.7545% ( 34) 00:30:17.487 7240.440 - 7269.059: 1.0283% ( 41) 00:30:17.487 7269.059 - 7297.677: 1.3956% ( 55) 00:30:17.487 7297.677 - 7326.295: 1.8563% ( 69) 00:30:17.487 7326.295 - 7383.532: 3.9129% ( 308) 00:30:17.487 7383.532 - 7440.769: 6.5572% ( 396) 00:30:17.487 7440.769 - 7498.005: 9.8825% ( 498) 00:30:17.487 7498.005 - 7555.242: 13.4682% ( 537) 00:30:17.487 7555.242 - 7612.479: 17.1408% ( 550) 00:30:17.487 7612.479 - 7669.715: 20.9335% ( 568) 00:30:17.487 7669.715 - 7726.952: 24.8932% ( 593) 00:30:17.487 7726.952 - 7784.189: 29.0532% ( 623) 00:30:17.487 7784.189 - 7841.425: 33.5003% ( 666) 00:30:17.487 7841.425 - 7898.662: 38.0809% ( 686) 00:30:17.487 7898.662 - 7955.899: 42.6683% ( 687) 00:30:17.487 7955.899 - 8013.135: 47.2756% ( 690) 00:30:17.487 8013.135 - 8070.372: 51.7628% ( 672) 00:30:17.487 8070.372 - 8127.609: 56.2500% ( 672) 00:30:17.487 8127.609 - 8184.845: 60.6103% ( 653) 00:30:17.487 8184.845 - 8242.082: 64.8972% ( 642) 00:30:17.487 8242.082 - 8299.319: 68.7634% ( 579) 00:30:17.487 8299.319 - 8356.555: 72.0753% ( 496) 00:30:17.487 8356.555 - 8413.792: 74.8264% ( 412) 00:30:17.487 8413.792 - 8471.029: 77.0099% ( 327) 00:30:17.487 8471.029 - 8528.266: 78.6725% ( 249) 00:30:17.487 8528.266 - 8585.502: 79.8611% ( 178) 00:30:17.487 8585.502 - 8642.739: 80.9161% ( 158) 00:30:17.487 8642.739 - 8699.976: 81.8376% ( 138) 00:30:17.487 8699.976 - 8757.212: 82.7324% ( 134) 00:30:17.487 8757.212 - 8814.449: 83.3934% ( 99) 00:30:17.487 8814.449 - 8871.686: 83.9810% ( 88) 00:30:17.487 8871.686 - 8928.922: 84.5152% ( 80) 00:30:17.487 8928.922 - 8986.159: 85.0294% ( 77) 
00:30:17.487 8986.159 - 9043.396: 85.5235% ( 74) 00:30:17.487 9043.396 - 9100.632: 86.0577% ( 80) 00:30:17.487 9100.632 - 9157.869: 86.6987% ( 96) 00:30:17.487 9157.869 - 9215.106: 87.3331% ( 95) 00:30:17.487 9215.106 - 9272.342: 87.9674% ( 95) 00:30:17.487 9272.342 - 9329.579: 88.6018% ( 95) 00:30:17.487 9329.579 - 9386.816: 89.2094% ( 91) 00:30:17.487 9386.816 - 9444.052: 89.7837% ( 86) 00:30:17.487 9444.052 - 9501.289: 90.3913% ( 91) 00:30:17.487 9501.289 - 9558.526: 90.9522% ( 84) 00:30:17.487 9558.526 - 9615.762: 91.4797% ( 79) 00:30:17.487 9615.762 - 9672.999: 92.0873% ( 91) 00:30:17.487 9672.999 - 9730.236: 92.6015% ( 77) 00:30:17.487 9730.236 - 9787.472: 93.0889% ( 73) 00:30:17.487 9787.472 - 9844.709: 93.5630% ( 71) 00:30:17.487 9844.709 - 9901.946: 93.9971% ( 65) 00:30:17.487 9901.946 - 9959.183: 94.3309% ( 50) 00:30:17.487 9959.183 - 10016.419: 94.5847% ( 38) 00:30:17.487 10016.419 - 10073.656: 94.7783% ( 29) 00:30:17.487 10073.656 - 10130.893: 94.9519% ( 26) 00:30:17.487 10130.893 - 10188.129: 95.0921% ( 21) 00:30:17.487 10188.129 - 10245.366: 95.1990% ( 16) 00:30:17.487 10245.366 - 10302.603: 95.2791% ( 12) 00:30:17.487 10302.603 - 10359.839: 95.3659% ( 13) 00:30:17.487 10359.839 - 10417.076: 95.4193% ( 8) 00:30:17.487 10417.076 - 10474.313: 95.4861% ( 10) 00:30:17.487 10474.313 - 10531.549: 95.5462% ( 9) 00:30:17.487 10531.549 - 10588.786: 95.6130% ( 10) 00:30:17.487 10588.786 - 10646.023: 95.6664% ( 8) 00:30:17.487 10646.023 - 10703.259: 95.7065% ( 6) 00:30:17.487 10703.259 - 10760.496: 95.7465% ( 6) 00:30:17.487 10760.496 - 10817.733: 95.7866% ( 6) 00:30:17.487 10817.733 - 10874.969: 95.8267% ( 6) 00:30:17.487 10874.969 - 10932.206: 95.8667% ( 6) 00:30:17.487 10932.206 - 10989.443: 95.9001% ( 5) 00:30:17.487 10989.443 - 11046.679: 95.9402% ( 6) 00:30:17.487 11046.679 - 11103.916: 95.9802% ( 6) 00:30:17.487 11103.916 - 11161.153: 96.0203% ( 6) 00:30:17.487 11161.153 - 11218.390: 96.0604% ( 6) 00:30:17.487 11218.390 - 11275.626: 96.1004% ( 6) 00:30:17.487 11275.626 - 11332.863: 96.1271% ( 4) 00:30:17.487 11332.863 - 11390.100: 96.1472% ( 3) 00:30:17.487 11390.100 - 11447.336: 96.1538% ( 1) 00:30:17.487 11447.336 - 11504.573: 96.1806% ( 4) 00:30:17.487 11504.573 - 11561.810: 96.2073% ( 4) 00:30:17.487 11561.810 - 11619.046: 96.2273% ( 3) 00:30:17.487 11619.046 - 11676.283: 96.2740% ( 7) 00:30:17.487 11676.283 - 11733.520: 96.3141% ( 6) 00:30:17.487 11733.520 - 11790.756: 96.3608% ( 7) 00:30:17.487 11790.756 - 11847.993: 96.4076% ( 7) 00:30:17.487 11847.993 - 11905.230: 96.4476% ( 6) 00:30:17.487 11905.230 - 11962.466: 96.4944% ( 7) 00:30:17.487 11962.466 - 12019.703: 96.5478% ( 8) 00:30:17.487 12019.703 - 12076.940: 96.6279% ( 12) 00:30:17.487 12076.940 - 12134.176: 96.6880% ( 9) 00:30:17.487 12134.176 - 12191.413: 96.7481% ( 9) 00:30:17.487 12191.413 - 12248.650: 96.8082% ( 9) 00:30:17.487 12248.650 - 12305.886: 96.8550% ( 7) 00:30:17.487 12305.886 - 12363.123: 96.9217% ( 10) 00:30:17.487 12363.123 - 12420.360: 96.9885% ( 10) 00:30:17.487 12420.360 - 12477.597: 97.0620% ( 11) 00:30:17.487 12477.597 - 12534.833: 97.1154% ( 8) 00:30:17.487 12534.833 - 12592.070: 97.1822% ( 10) 00:30:17.487 12592.070 - 12649.307: 97.2289% ( 7) 00:30:17.487 12649.307 - 12706.543: 97.2890% ( 9) 00:30:17.487 12706.543 - 12763.780: 97.3491% ( 9) 00:30:17.487 12763.780 - 12821.017: 97.4025% ( 8) 00:30:17.487 12821.017 - 12878.253: 97.4493% ( 7) 00:30:17.487 12878.253 - 12935.490: 97.4826% ( 5) 00:30:17.487 12935.490 - 12992.727: 97.5227% ( 6) 00:30:17.487 12992.727 - 13049.963: 97.5494% ( 4) 
00:30:17.487 13049.963 - 13107.200: 97.5828% ( 5) 00:30:17.487 13107.200 - 13164.437: 97.6229% ( 6) 00:30:17.487 13164.437 - 13221.673: 97.6629% ( 6) 00:30:17.487 13221.673 - 13278.910: 97.6963% ( 5) 00:30:17.487 13278.910 - 13336.147: 97.7431% ( 7) 00:30:17.487 13336.147 - 13393.383: 97.7898% ( 7) 00:30:17.487 13393.383 - 13450.620: 97.8632% ( 11) 00:30:17.487 13450.620 - 13507.857: 97.9233% ( 9) 00:30:17.487 13507.857 - 13565.093: 97.9434% ( 3) 00:30:17.487 13565.093 - 13622.330: 97.9768% ( 5) 00:30:17.487 13622.330 - 13679.567: 98.0035% ( 4) 00:30:17.487 13679.567 - 13736.803: 98.0302% ( 4) 00:30:17.487 13736.803 - 13794.040: 98.0702% ( 6) 00:30:17.487 13794.040 - 13851.277: 98.0970% ( 4) 00:30:17.487 13851.277 - 13908.514: 98.1237% ( 4) 00:30:17.487 13908.514 - 13965.750: 98.1637% ( 6) 00:30:17.487 13965.750 - 14022.987: 98.2171% ( 8) 00:30:17.487 14022.987 - 14080.224: 98.2772% ( 9) 00:30:17.488 14080.224 - 14137.460: 98.3307% ( 8) 00:30:17.488 14137.460 - 14194.697: 98.3974% ( 10) 00:30:17.488 14194.697 - 14251.934: 98.4575% ( 9) 00:30:17.488 14251.934 - 14309.170: 98.5243% ( 10) 00:30:17.488 14309.170 - 14366.407: 98.5844% ( 9) 00:30:17.488 14366.407 - 14423.644: 98.6445% ( 9) 00:30:17.488 14423.644 - 14480.880: 98.6846% ( 6) 00:30:17.488 14480.880 - 14538.117: 98.7246% ( 6) 00:30:17.488 14538.117 - 14595.354: 98.7647% ( 6) 00:30:17.488 14595.354 - 14652.590: 98.8114% ( 7) 00:30:17.488 14652.590 - 14767.064: 98.8982% ( 13) 00:30:17.488 14767.064 - 14881.537: 98.9850% ( 13) 00:30:17.488 14881.537 - 14996.010: 99.0718% ( 13) 00:30:17.488 14996.010 - 15110.484: 99.1386% ( 10) 00:30:17.488 15110.484 - 15224.957: 99.1453% ( 1) 00:30:17.488 33884.115 - 34113.062: 99.1653% ( 3) 00:30:17.488 34113.062 - 34342.009: 99.2121% ( 7) 00:30:17.488 34342.009 - 34570.955: 99.2588% ( 7) 00:30:17.488 34570.955 - 34799.902: 99.3056% ( 7) 00:30:17.488 34799.902 - 35028.849: 99.3590% ( 8) 00:30:17.488 35028.849 - 35257.796: 99.4057% ( 7) 00:30:17.488 35257.796 - 35486.742: 99.4525% ( 7) 00:30:17.488 35486.742 - 35715.689: 99.5059% ( 8) 00:30:17.488 35715.689 - 35944.636: 99.5459% ( 6) 00:30:17.488 35944.636 - 36173.583: 99.5726% ( 4) 00:30:17.488 40294.624 - 40523.570: 99.5994% ( 4) 00:30:17.488 40523.570 - 40752.517: 99.6461% ( 7) 00:30:17.488 40752.517 - 40981.464: 99.6862% ( 6) 00:30:17.488 40981.464 - 41210.410: 99.7396% ( 8) 00:30:17.488 41210.410 - 41439.357: 99.7863% ( 7) 00:30:17.488 41439.357 - 41668.304: 99.8331% ( 7) 00:30:17.488 41668.304 - 41897.251: 99.8798% ( 7) 00:30:17.488 41897.251 - 42126.197: 99.9265% ( 7) 00:30:17.488 42126.197 - 42355.144: 99.9800% ( 8) 00:30:17.488 42355.144 - 42584.091: 100.0000% ( 3) 00:30:17.488 00:30:17.488 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:30:17.488 ============================================================================== 00:30:17.488 Range in us Cumulative IO count 00:30:17.488 7011.493 - 7040.112: 0.0267% ( 4) 00:30:17.488 7040.112 - 7068.730: 0.0534% ( 4) 00:30:17.488 7068.730 - 7097.348: 0.0935% ( 6) 00:30:17.488 7097.348 - 7125.967: 0.1402% ( 7) 00:30:17.488 7125.967 - 7154.585: 0.2204% ( 12) 00:30:17.488 7154.585 - 7183.203: 0.3272% ( 16) 00:30:17.488 7183.203 - 7211.822: 0.4474% ( 18) 00:30:17.488 7211.822 - 7240.440: 0.6010% ( 23) 00:30:17.488 7240.440 - 7269.059: 0.8480% ( 37) 00:30:17.488 7269.059 - 7297.677: 1.2620% ( 62) 00:30:17.488 7297.677 - 7326.295: 1.7495% ( 73) 00:30:17.488 7326.295 - 7383.532: 3.4188% ( 250) 00:30:17.488 7383.532 - 7440.769: 6.1098% ( 403) 00:30:17.488 7440.769 - 7498.005: 9.6287% ( 527) 
00:30:17.488 7498.005 - 7555.242: 13.3547% ( 558) 00:30:17.488 7555.242 - 7612.479: 17.0005% ( 546) 00:30:17.488 7612.479 - 7669.715: 20.9001% ( 584) 00:30:17.488 7669.715 - 7726.952: 24.8865% ( 597) 00:30:17.488 7726.952 - 7784.189: 28.7727% ( 582) 00:30:17.488 7784.189 - 7841.425: 33.3267% ( 682) 00:30:17.488 7841.425 - 7898.662: 37.8940% ( 684) 00:30:17.488 7898.662 - 7955.899: 42.7217% ( 723) 00:30:17.488 7955.899 - 8013.135: 47.3224% ( 689) 00:30:17.488 8013.135 - 8070.372: 51.8296% ( 675) 00:30:17.488 8070.372 - 8127.609: 56.4637% ( 694) 00:30:17.488 8127.609 - 8184.845: 60.8507% ( 657) 00:30:17.488 8184.845 - 8242.082: 65.1042% ( 637) 00:30:17.488 8242.082 - 8299.319: 69.0572% ( 592) 00:30:17.488 8299.319 - 8356.555: 72.4426% ( 507) 00:30:17.488 8356.555 - 8413.792: 75.1936% ( 412) 00:30:17.488 8413.792 - 8471.029: 77.3838% ( 328) 00:30:17.488 8471.029 - 8528.266: 78.8795% ( 224) 00:30:17.488 8528.266 - 8585.502: 80.0681% ( 178) 00:30:17.488 8585.502 - 8642.739: 81.1031% ( 155) 00:30:17.488 8642.739 - 8699.976: 82.0847% ( 147) 00:30:17.488 8699.976 - 8757.212: 82.8926% ( 121) 00:30:17.488 8757.212 - 8814.449: 83.4802% ( 88) 00:30:17.488 8814.449 - 8871.686: 83.9877% ( 76) 00:30:17.488 8871.686 - 8928.922: 84.5486% ( 84) 00:30:17.488 8928.922 - 8986.159: 85.0628% ( 77) 00:30:17.488 8986.159 - 9043.396: 85.5702% ( 76) 00:30:17.488 9043.396 - 9100.632: 86.1311% ( 84) 00:30:17.488 9100.632 - 9157.869: 86.7922% ( 99) 00:30:17.488 9157.869 - 9215.106: 87.4466% ( 98) 00:30:17.488 9215.106 - 9272.342: 88.0342% ( 88) 00:30:17.488 9272.342 - 9329.579: 88.6018% ( 85) 00:30:17.488 9329.579 - 9386.816: 89.2094% ( 91) 00:30:17.488 9386.816 - 9444.052: 89.7837% ( 86) 00:30:17.488 9444.052 - 9501.289: 90.3178% ( 80) 00:30:17.488 9501.289 - 9558.526: 90.8787% ( 84) 00:30:17.488 9558.526 - 9615.762: 91.4597% ( 87) 00:30:17.488 9615.762 - 9672.999: 91.9671% ( 76) 00:30:17.488 9672.999 - 9730.236: 92.4813% ( 77) 00:30:17.488 9730.236 - 9787.472: 92.9888% ( 76) 00:30:17.488 9787.472 - 9844.709: 93.4696% ( 72) 00:30:17.488 9844.709 - 9901.946: 93.8502% ( 57) 00:30:17.488 9901.946 - 9959.183: 94.1907% ( 51) 00:30:17.488 9959.183 - 10016.419: 94.4645% ( 41) 00:30:17.488 10016.419 - 10073.656: 94.6782% ( 32) 00:30:17.488 10073.656 - 10130.893: 94.8050% ( 19) 00:30:17.488 10130.893 - 10188.129: 94.8785% ( 11) 00:30:17.488 10188.129 - 10245.366: 94.9452% ( 10) 00:30:17.488 10245.366 - 10302.603: 95.0321% ( 13) 00:30:17.488 10302.603 - 10359.839: 95.1055% ( 11) 00:30:17.488 10359.839 - 10417.076: 95.1790% ( 11) 00:30:17.488 10417.076 - 10474.313: 95.2457% ( 10) 00:30:17.488 10474.313 - 10531.549: 95.3192% ( 11) 00:30:17.488 10531.549 - 10588.786: 95.3993% ( 12) 00:30:17.488 10588.786 - 10646.023: 95.4861% ( 13) 00:30:17.488 10646.023 - 10703.259: 95.5662% ( 12) 00:30:17.488 10703.259 - 10760.496: 95.6397% ( 11) 00:30:17.488 10760.496 - 10817.733: 95.7198% ( 12) 00:30:17.488 10817.733 - 10874.969: 95.7933% ( 11) 00:30:17.488 10874.969 - 10932.206: 95.8667% ( 11) 00:30:17.488 10932.206 - 10989.443: 95.9268% ( 9) 00:30:17.488 10989.443 - 11046.679: 95.9802% ( 8) 00:30:17.488 11046.679 - 11103.916: 96.0403% ( 9) 00:30:17.488 11103.916 - 11161.153: 96.0871% ( 7) 00:30:17.488 11161.153 - 11218.390: 96.1205% ( 5) 00:30:17.488 11218.390 - 11275.626: 96.1538% ( 5) 00:30:17.488 11390.100 - 11447.336: 96.1739% ( 3) 00:30:17.488 11447.336 - 11504.573: 96.1939% ( 3) 00:30:17.488 11504.573 - 11561.810: 96.2206% ( 4) 00:30:17.488 11561.810 - 11619.046: 96.2407% ( 3) 00:30:17.488 11619.046 - 11676.283: 96.2607% ( 3) 
00:30:17.488 [remaining per-bucket latency rows omitted]
00:30:17.489
00:30:17.489 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:30:17.489 ==============================================================================
00:30:17.489 Range in us Cumulative IO count
00:30:17.489 [per-bucket latency rows omitted]
00:30:17.490
00:30:17.490 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:30:17.490 ==============================================================================
00:30:17.490 Range in us Cumulative IO count
00:30:17.490 [per-bucket latency rows omitted]
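Each histogram row above covers one latency bucket: the two "Range in us" values are the bucket bounds in microseconds, the percentage is the cumulative share of I/Os completed at or below the bucket's upper bound, and the parenthesized number is the I/O count that landed in that bucket alone (empty buckets are skipped, which is why the ranges are not always contiguous). As a minimal sketch of post-processing these rows, assuming the console output has been saved to a file named perf.log (a hypothetical name, not produced by this job), the first bucket at or above the 99th percentile can be located with awk:

  # Minimal sketch, assuming the output above was saved as perf.log.
  # Bucket rows look like: "<timestamp> <low> - <high>: <cum>% ( <count>)".
  # Prints only the first match in the file, so run it on one histogram.
  awk '$3 == "-" && $5 ~ /%$/ {
         cum = $5; sub(/%$/, "", cum)        # strip the trailing "%"
         if (cum + 0 >= 99.0 && !done) {     # first row crossing p99
           hi = $4; sub(/:$/, "", hi)
           print "first bucket at/above p99:", $2, "-", hi, "us"
           done = 1
         }
       }' perf.log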
00:30:17.491
00:30:17.491 05:41:37 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:30:18.883 Initializing NVMe Controllers
00:30:18.883 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:30:18.883 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:30:18.883 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:30:18.883 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:30:18.883 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:30:18.883 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:30:18.883 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:30:18.883 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:30:18.883 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:30:18.883 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:30:18.883 Initialization complete. Launching workers.
00:30:18.883 ========================================================
00:30:18.883 Latency(us)
00:30:18.883 Device Information : IOPS MiB/s Average min max
00:30:18.883 PCIE (0000:00:10.0) NSID 1 from core 0: 7221.99 84.63 17789.01 10761.36 51439.12
00:30:18.883 PCIE (0000:00:11.0) NSID 1 from core 0: 7221.99 84.63 17760.09 10946.54 49544.45
00:30:18.883 PCIE (0000:00:13.0) NSID 1 from core 0: 7221.99 84.63 17723.99 10907.75 48661.61
00:30:18.883 PCIE (0000:00:12.0) NSID 1 from core 0: 7221.99 84.63 17689.17 10939.39 46577.96
00:30:18.883 PCIE (0000:00:12.0) NSID 2 from core 0: 7221.99 84.63 17654.45 10867.73 45914.19
00:30:18.883 PCIE (0000:00:12.0) NSID 3 from core 0: 7285.90 85.38 17465.30 10765.41 32812.31
00:30:18.883 ========================================================
00:30:18.883 Total : 43395.85 508.55 17680.02 10761.36 51439.12
00:30:18.883
00:30:18.883 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:18.883 =================================================================================
00:30:18.883 1.00000% : 11275.626us
00:30:18.883 10.00000% : 12363.123us
00:30:18.883 25.00000% : 13908.514us
00:30:18.883 50.00000% : 16827.584us
00:30:18.883 75.00000% : 21520.992us
00:30:18.883 90.00000% : 23009.146us
00:30:18.883 95.00000% : 24039.406us
00:30:18.883 98.00000% : 24955.193us
00:30:18.883 99.00000% : 41439.357us
00:30:18.883 99.50000% : 50139.333us
00:30:18.883 99.90000% : 51284.066us
00:30:18.883 99.99000% : 51513.013us
00:30:18.883 99.99900% : 51513.013us
00:30:18.883 99.99990% : 51513.013us
00:30:18.883 99.99999% : 51513.013us
00:30:18.883
00:30:18.883 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:30:18.883 =================================================================================
00:30:18.883 1.00000% : 11447.336us
00:30:18.883 10.00000% : 12248.650us
00:30:18.883 25.00000% : 13965.750us
00:30:18.883 50.00000% : 16598.638us
00:30:18.883 75.00000% : 21978.886us
00:30:18.883 90.00000% : 22780.199us
00:30:18.883 95.00000% : 23238.093us
00:30:18.883 98.00000% : 24382.826us
00:30:18.883 99.00000% : 39149.890us
00:30:18.883 99.50000% : 48536.706us
00:30:18.883 99.90000% : 49452.493us
00:30:18.883 99.99000% : 49681.439us
00:30:18.883 99.99900% : 49681.439us
00:30:18.883 99.99990% : 49681.439us
00:30:18.883 99.99999% : 49681.439us
00:30:18.883
00:30:18.883 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:30:18.883 =================================================================================
00:30:18.883 1.00000% : 11447.336us
00:30:18.883 10.00000% : 12191.413us
00:30:18.883 25.00000% : 13965.750us
00:30:18.883 50.00000% : 16484.164us
00:30:18.883 75.00000% : 21978.886us
00:30:18.883 90.00000% : 22665.726us
00:30:18.883 95.00000% : 23352.566us
00:30:18.883 98.00000% : 24268.353us
00:30:18.883 99.00000% : 38005.156us
00:30:18.883 99.50000% : 47620.919us
00:30:18.883 99.90000% : 48536.706us
00:30:18.883 99.99000% : 48765.652us
00:30:18.883 99.99900% : 48765.652us
00:30:18.883 99.99990% : 48765.652us
00:30:18.883 99.99999% : 48765.652us
00:30:18.883
00:30:18.883 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:30:18.883 =================================================================================
00:30:18.883 1.00000% : 11332.863us
00:30:18.883 10.00000% : 12191.413us
00:30:18.883 25.00000% : 13851.277us
00:30:18.883 50.00000% : 16484.164us
00:30:18.883 75.00000% : 21978.886us
00:30:18.883 90.00000% : 22780.199us
00:30:18.883 95.00000% : 23352.566us
00:30:18.883 98.00000% : 24153.879us
00:30:18.883 99.00000% : 35715.689us
00:30:18.883 99.50000% : 45560.398us
00:30:18.883 99.90000% : 46476.185us
00:30:18.883 99.99000% : 46705.132us
00:30:18.883 99.99900% : 46705.132us
00:30:18.883 99.99990% : 46705.132us
00:30:18.883 99.99999% : 46705.132us
00:30:18.883
00:30:18.883 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:30:18.883 =================================================================================
00:30:18.883 1.00000% : 11390.100us
00:30:18.883 10.00000% : 12191.413us
00:30:18.883 25.00000% : 14080.224us
00:30:18.883 50.00000% : 16713.111us
00:30:18.883 75.00000% : 21978.886us
00:30:18.883 90.00000% : 22780.199us
00:30:18.883 95.00000% : 23238.093us
00:30:18.883 98.00000% : 24153.879us
00:30:18.883 99.00000% : 33426.222us
00:30:18.883 99.50000% : 44186.718us
00:30:18.883 99.90000% : 45789.345us
00:30:18.883 99.99000% : 46018.292us
00:30:18.883 99.99900% : 46018.292us
00:30:18.883 99.99990% : 46018.292us
00:30:18.883 99.99999% : 46018.292us
00:30:18.883
00:30:18.883 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:30:18.883 =================================================================================
00:30:18.883 1.00000% : 11275.626us
00:30:18.883 10.00000% : 12248.650us
00:30:18.883 25.00000% : 13851.277us
00:30:18.883 50.00000% : 16827.584us
00:30:18.883 75.00000% : 21978.886us
00:30:18.883 90.00000% : 22665.726us
00:30:18.883 95.00000% : 23123.619us
00:30:18.883 98.00000% : 23810.459us
00:30:18.883 99.00000% : 24611.773us
00:30:18.883 99.50000% : 31594.648us
00:30:18.883 99.90000% : 32739.382us
00:30:18.883 99.99000% : 32968.328us
00:30:18.883 99.99900% : 32968.328us
00:30:18.883 99.99990% : 32968.328us
00:30:18.883 99.99999% : 32968.328us
00:30:18.883
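For context, the run above comes from spdk_nvme_perf with queue depth 128 (-q 128), a pure write workload (-w write), 12288-byte (12 KiB) I/Os (-o 12288) and a one-second run time (-t 1); a single -L enables software latency tracking and doubling it to -LL adds the detailed per-bucket histograms, which is why both the summary percentiles above and the histograms below appear, while -i 0 selects shared memory group 0. The IOPS and MiB/s columns of the device table are tied together by the I/O size, as this back-of-the-envelope check shows:

  # Throughput sanity check: IOPS x I/O size (bytes) / 2^20 = MiB/s.
  # For PCIE (0000:00:10.0) NSID 1: 7221.99 * 12288 / 1048576 ~= 84.63 MiB/s,
  # and for the Total row: 43395.85 * 12288 / 1048576 ~= 508.55 MiB/s,
  # both matching the table above.
  awk 'BEGIN { printf "%.2f MiB/s\n", 7221.99 * 12288 / 1048576 }'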
00:30:18.883 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:30:18.883 ==============================================================================
00:30:18.883 Range in us Cumulative IO count
00:30:18.883 [per-bucket latency rows omitted]
00:30:18.884
00:30:18.884 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:30:18.884 ==============================================================================
00:30:18.884 Range in us Cumulative IO count
00:30:18.884 [per-bucket latency rows omitted]
00:30:18.885
00:30:18.885 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:30:18.885 ==============================================================================
00:30:18.885 Range in us Cumulative IO count
00:30:18.885 [per-bucket latency rows omitted]
00:30:18.885
00:30:18.885 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:30:18.885 ==============================================================================
00:30:18.885 Range in us Cumulative IO count
00:30:18.886 [per-bucket latency rows omitted]
00:30:18.886
00:30:18.886 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:30:18.886 ==============================================================================
00:30:18.886 Range in us Cumulative IO count
00:30:18.886 [per-bucket latency rows omitted]
00:30:18.887 21520.992 - 21635.466: 72.3866% ( 64) 00:30:18.887 21635.466 - 21749.939: 73.4928% ( 80) 00:30:18.887 21749.939 - 21864.412: 74.7235% ( 89) 00:30:18.887 21864.412 - 21978.886: 76.0509% ( 96) 00:30:18.887 21978.886 - 22093.359: 77.4336% ( 100) 00:30:18.887 22093.359 - 22207.832: 80.3650% ( 212) 00:30:18.887 22207.832 - 22322.306: 82.9784% ( 189) 00:30:18.887 22322.306 - 22436.779: 86.3247% ( 242) 00:30:18.887 22436.779 - 22551.252: 88.0946% ( 128) 00:30:18.887 22551.252 - 22665.726: 89.3805% ( 93) 00:30:18.887 22665.726 - 22780.199: 90.6388% ( 91) 00:30:18.887 22780.199 - 22894.672: 91.9386% ( 94) 00:30:18.887 22894.672 - 23009.146: 92.8512% ( 66) 00:30:18.887 23009.146 - 23123.619: 94.8009% ( 141) 00:30:18.887 23123.619 - 23238.093: 95.4784% ( 49) 00:30:18.887 23238.093 - 23352.566: 96.4463% ( 70) 00:30:18.887 23352.566 - 23467.039: 96.8473% ( 29) 00:30:18.887 23467.039 - 23581.513: 97.1654% ( 23) 00:30:18.887 23581.513 - 23695.986: 97.4419% ( 20) 00:30:18.887 23695.986 - 23810.459: 97.6079% ( 12) 00:30:18.887 23810.459 - 23924.933: 97.7876% ( 13) 00:30:18.887 23924.933 - 24039.406: 97.9674% ( 13) 00:30:18.887 24039.406 - 24153.879: 98.0918% ( 9) 00:30:18.887 24153.879 - 24268.353: 98.1748% ( 6) 00:30:18.887 24268.353 - 24382.826: 98.2163% ( 3) 00:30:18.887 24382.826 - 24497.300: 98.2301% ( 1) 00:30:18.887 31136.755 - 31365.701: 98.2577% ( 2) 00:30:18.887 31365.701 - 31594.648: 98.3545% ( 7) 00:30:18.887 31594.648 - 31823.595: 98.4375% ( 6) 00:30:18.887 31823.595 - 32052.541: 98.5205% ( 6) 00:30:18.887 32052.541 - 32281.488: 98.6173% ( 7) 00:30:18.887 32281.488 - 32510.435: 98.7140% ( 7) 00:30:18.887 32510.435 - 32739.382: 98.7970% ( 6) 00:30:18.887 32739.382 - 32968.328: 98.9076% ( 8) 00:30:18.887 32968.328 - 33197.275: 98.9906% ( 6) 00:30:18.887 33197.275 - 33426.222: 99.0874% ( 7) 00:30:18.887 33426.222 - 33655.169: 99.1150% ( 2) 00:30:18.887 42584.091 - 42813.038: 99.1427% ( 2) 00:30:18.887 42813.038 - 43041.984: 99.2118% ( 5) 00:30:18.887 43041.984 - 43270.931: 99.2671% ( 4) 00:30:18.887 43270.931 - 43499.878: 99.3363% ( 5) 00:30:18.887 43499.878 - 43728.824: 99.3916% ( 4) 00:30:18.887 43728.824 - 43957.771: 99.4607% ( 5) 00:30:18.887 43957.771 - 44186.718: 99.5160% ( 4) 00:30:18.887 44186.718 - 44415.665: 99.5713% ( 4) 00:30:18.887 44415.665 - 44644.611: 99.6405% ( 5) 00:30:18.887 44644.611 - 44873.558: 99.7096% ( 5) 00:30:18.887 44873.558 - 45102.505: 99.7649% ( 4) 00:30:18.887 45102.505 - 45331.452: 99.8341% ( 5) 00:30:18.887 45331.452 - 45560.398: 99.8894% ( 4) 00:30:18.887 45560.398 - 45789.345: 99.9585% ( 5) 00:30:18.887 45789.345 - 46018.292: 100.0000% ( 3) 00:30:18.887 00:30:18.887 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:30:18.887 ============================================================================== 00:30:18.887 Range in us Cumulative IO count 00:30:18.887 10760.496 - 10817.733: 0.0137% ( 1) 00:30:18.887 10874.969 - 10932.206: 0.0822% ( 5) 00:30:18.887 10932.206 - 10989.443: 0.1508% ( 5) 00:30:18.887 10989.443 - 11046.679: 0.2330% ( 6) 00:30:18.887 11046.679 - 11103.916: 0.3427% ( 8) 00:30:18.887 11103.916 - 11161.153: 0.6442% ( 22) 00:30:18.887 11161.153 - 11218.390: 0.8361% ( 14) 00:30:18.887 11218.390 - 11275.626: 1.0554% ( 16) 00:30:18.887 11275.626 - 11332.863: 1.2884% ( 17) 00:30:18.887 11332.863 - 11390.100: 1.6310% ( 25) 00:30:18.887 11390.100 - 11447.336: 1.9600% ( 24) 00:30:18.887 11447.336 - 11504.573: 2.7138% ( 55) 00:30:18.887 11504.573 - 11561.810: 3.1250% ( 30) 00:30:18.887 11561.810 - 11619.046: 3.3580% ( 17) 
00:30:18.887 11619.046 - 11676.283: 3.8103% ( 33) 00:30:18.887 11676.283 - 11733.520: 4.3448% ( 39) 00:30:18.887 11733.520 - 11790.756: 5.1398% ( 58) 00:30:18.887 11790.756 - 11847.993: 5.7292% ( 43) 00:30:18.887 11847.993 - 11905.230: 6.5789% ( 62) 00:30:18.887 11905.230 - 11962.466: 7.0724% ( 36) 00:30:18.887 11962.466 - 12019.703: 7.5521% ( 35) 00:30:18.887 12019.703 - 12076.940: 8.1826% ( 46) 00:30:18.887 12076.940 - 12134.176: 8.8268% ( 47) 00:30:18.887 12134.176 - 12191.413: 9.4161% ( 43) 00:30:18.887 12191.413 - 12248.650: 10.1014% ( 50) 00:30:18.887 12248.650 - 12305.886: 10.7182% ( 45) 00:30:18.887 12305.886 - 12363.123: 11.5817% ( 63) 00:30:18.887 12363.123 - 12420.360: 12.0477% ( 34) 00:30:18.887 12420.360 - 12477.597: 12.6919% ( 47) 00:30:18.887 12477.597 - 12534.833: 13.3635% ( 49) 00:30:18.887 12534.833 - 12592.070: 14.0899% ( 53) 00:30:18.887 12592.070 - 12649.307: 14.7067% ( 45) 00:30:18.887 12649.307 - 12706.543: 15.4879% ( 57) 00:30:18.887 12706.543 - 12763.780: 16.0773% ( 43) 00:30:18.887 12763.780 - 12821.017: 16.5159% ( 32) 00:30:18.887 12821.017 - 12878.253: 16.8448% ( 24) 00:30:18.887 12878.253 - 12935.490: 17.2286% ( 28) 00:30:18.887 12935.490 - 12992.727: 17.7357% ( 37) 00:30:18.887 12992.727 - 13049.963: 18.2429% ( 37) 00:30:18.887 13049.963 - 13107.200: 18.6952% ( 33) 00:30:18.887 13107.200 - 13164.437: 19.1064% ( 30) 00:30:18.887 13164.437 - 13221.673: 19.5038% ( 29) 00:30:18.887 13221.673 - 13278.910: 19.9424% ( 32) 00:30:18.887 13278.910 - 13336.147: 20.5044% ( 41) 00:30:18.887 13336.147 - 13393.383: 21.0800% ( 42) 00:30:18.887 13393.383 - 13450.620: 21.7242% ( 47) 00:30:18.887 13450.620 - 13507.857: 22.0943% ( 27) 00:30:18.887 13507.857 - 13565.093: 22.4781% ( 28) 00:30:18.887 13565.093 - 13622.330: 23.0126% ( 39) 00:30:18.887 13622.330 - 13679.567: 23.3827% ( 27) 00:30:18.887 13679.567 - 13736.803: 23.9172% ( 39) 00:30:18.887 13736.803 - 13794.040: 24.6025% ( 50) 00:30:18.887 13794.040 - 13851.277: 25.2330% ( 46) 00:30:18.887 13851.277 - 13908.514: 25.6168% ( 28) 00:30:18.887 13908.514 - 13965.750: 26.0417% ( 31) 00:30:18.887 13965.750 - 14022.987: 26.4117% ( 27) 00:30:18.887 14022.987 - 14080.224: 26.8777% ( 34) 00:30:18.887 14080.224 - 14137.460: 27.3438% ( 34) 00:30:18.887 14137.460 - 14194.697: 27.7412% ( 29) 00:30:18.887 14194.697 - 14251.934: 28.0291% ( 21) 00:30:18.887 14251.934 - 14309.170: 28.3169% ( 21) 00:30:18.887 14309.170 - 14366.407: 29.0433% ( 53) 00:30:18.887 14366.407 - 14423.644: 29.7560% ( 52) 00:30:18.887 14423.644 - 14480.880: 30.4139% ( 48) 00:30:18.887 14480.880 - 14538.117: 30.9896% ( 42) 00:30:18.887 14538.117 - 14595.354: 31.4693% ( 35) 00:30:18.887 14595.354 - 14652.590: 32.3054% ( 61) 00:30:18.887 14652.590 - 14767.064: 33.4704% ( 85) 00:30:18.887 14767.064 - 14881.537: 34.6354% ( 85) 00:30:18.887 14881.537 - 14996.010: 36.0883% ( 106) 00:30:18.887 14996.010 - 15110.484: 37.6645% ( 115) 00:30:18.887 15110.484 - 15224.957: 39.1447% ( 108) 00:30:18.887 15224.957 - 15339.431: 40.3509% ( 88) 00:30:18.887 15339.431 - 15453.904: 41.4474% ( 80) 00:30:18.887 15453.904 - 15568.377: 42.2423% ( 58) 00:30:18.887 15568.377 - 15682.851: 42.8728% ( 46) 00:30:18.887 15682.851 - 15797.324: 43.5581% ( 50) 00:30:18.887 15797.324 - 15911.797: 44.5998% ( 76) 00:30:18.887 15911.797 - 16026.271: 45.3673% ( 56) 00:30:18.887 16026.271 - 16140.744: 45.9430% ( 42) 00:30:18.887 16140.744 - 16255.217: 46.6968% ( 55) 00:30:18.887 16255.217 - 16369.691: 47.3821% ( 50) 00:30:18.887 16369.691 - 16484.164: 48.2045% ( 60) 00:30:18.887 16484.164 - 16598.638: 
48.9035% ( 51) 00:30:18.887 16598.638 - 16713.111: 49.6848% ( 57) 00:30:18.887 16713.111 - 16827.584: 50.3564% ( 49) 00:30:18.887 16827.584 - 16942.058: 51.0965% ( 54) 00:30:18.887 16942.058 - 17056.531: 52.2889% ( 87) 00:30:18.887 17056.531 - 17171.004: 53.2072% ( 67) 00:30:18.887 17171.004 - 17285.478: 53.8103% ( 44) 00:30:18.887 17285.478 - 17399.951: 54.4956% ( 50) 00:30:18.887 17399.951 - 17514.424: 55.0164% ( 38) 00:30:18.887 17514.424 - 17628.898: 55.4825% ( 34) 00:30:18.887 17628.898 - 17743.371: 55.9211% ( 32) 00:30:18.887 17743.371 - 17857.845: 56.4693% ( 40) 00:30:18.887 17857.845 - 17972.318: 57.1546% ( 50) 00:30:18.887 17972.318 - 18086.791: 57.9359% ( 57) 00:30:18.887 18086.791 - 18201.265: 58.6349% ( 51) 00:30:18.887 18201.265 - 18315.738: 59.1146% ( 35) 00:30:18.887 18315.738 - 18430.211: 59.5669% ( 33) 00:30:18.887 18430.211 - 18544.685: 60.1837% ( 45) 00:30:18.887 18544.685 - 18659.158: 60.9512% ( 56) 00:30:18.887 18659.158 - 18773.631: 61.3761% ( 31) 00:30:18.887 18773.631 - 18888.105: 62.3355% ( 70) 00:30:18.887 18888.105 - 19002.578: 63.4046% ( 78) 00:30:18.887 19002.578 - 19117.052: 63.8980% ( 36) 00:30:18.887 19117.052 - 19231.525: 64.1447% ( 18) 00:30:18.887 19231.525 - 19345.998: 64.4737% ( 24) 00:30:18.888 19345.998 - 19460.472: 64.7204% ( 18) 00:30:18.888 19460.472 - 19574.945: 64.9260% ( 15) 00:30:18.888 19574.945 - 19689.418: 65.1179% ( 14) 00:30:18.888 19689.418 - 19803.892: 65.3235% ( 15) 00:30:18.888 19803.892 - 19918.365: 65.5839% ( 19) 00:30:18.888 19918.365 - 20032.838: 66.1321% ( 40) 00:30:18.888 20032.838 - 20147.312: 66.6667% ( 39) 00:30:18.888 20147.312 - 20261.785: 67.2012% ( 39) 00:30:18.888 20261.785 - 20376.259: 67.6535% ( 33) 00:30:18.888 20376.259 - 20490.732: 67.8454% ( 14) 00:30:18.888 20490.732 - 20605.205: 67.9550% ( 8) 00:30:18.888 20605.205 - 20719.679: 68.1332% ( 13) 00:30:18.888 20719.679 - 20834.152: 68.2703% ( 10) 00:30:18.888 20834.152 - 20948.625: 68.5718% ( 22) 00:30:18.888 20948.625 - 21063.099: 69.0652% ( 36) 00:30:18.888 21063.099 - 21177.572: 69.4216% ( 26) 00:30:18.888 21177.572 - 21292.045: 69.8054% ( 28) 00:30:18.888 21292.045 - 21406.519: 70.4084% ( 44) 00:30:18.888 21406.519 - 21520.992: 71.2856% ( 64) 00:30:18.888 21520.992 - 21635.466: 72.2588% ( 71) 00:30:18.888 21635.466 - 21749.939: 73.3964% ( 83) 00:30:18.888 21749.939 - 21864.412: 74.5477% ( 84) 00:30:18.888 21864.412 - 21978.886: 75.9594% ( 103) 00:30:18.888 21978.886 - 22093.359: 77.3163% ( 99) 00:30:18.888 22093.359 - 22207.832: 80.4550% ( 229) 00:30:18.888 22207.832 - 22322.306: 83.7582% ( 241) 00:30:18.888 22322.306 - 22436.779: 86.9518% ( 233) 00:30:18.888 22436.779 - 22551.252: 89.5148% ( 187) 00:30:18.888 22551.252 - 22665.726: 90.6113% ( 80) 00:30:18.888 22665.726 - 22780.199: 91.8997% ( 94) 00:30:18.888 22780.199 - 22894.672: 92.8454% ( 69) 00:30:18.888 22894.672 - 23009.146: 93.7226% ( 64) 00:30:18.888 23009.146 - 23123.619: 95.2029% ( 108) 00:30:18.888 23123.619 - 23238.093: 95.9019% ( 51) 00:30:18.888 23238.093 - 23352.566: 96.5186% ( 45) 00:30:18.888 23352.566 - 23467.039: 97.0532% ( 39) 00:30:18.888 23467.039 - 23581.513: 97.4370% ( 28) 00:30:18.888 23581.513 - 23695.986: 97.7385% ( 22) 00:30:18.888 23695.986 - 23810.459: 98.0263% ( 21) 00:30:18.888 23810.459 - 23924.933: 98.2593% ( 17) 00:30:18.888 23924.933 - 24039.406: 98.4649% ( 15) 00:30:18.888 24039.406 - 24153.879: 98.6157% ( 11) 00:30:18.888 24153.879 - 24268.353: 98.7664% ( 11) 00:30:18.888 24268.353 - 24382.826: 98.9172% ( 11) 00:30:18.888 24382.826 - 24497.300: 98.9995% ( 6) 00:30:18.888 
24497.300 - 24611.773: 99.0269% ( 2) 00:30:18.888 24611.773 - 24726.246: 99.0543% ( 2) 00:30:18.888 24726.246 - 24840.720: 99.0817% ( 2) 00:30:18.888 24840.720 - 24955.193: 99.1091% ( 2) 00:30:18.888 24955.193 - 25069.666: 99.1228% ( 1) 00:30:18.888 30449.914 - 30678.861: 99.1365% ( 1) 00:30:18.888 30678.861 - 30907.808: 99.2325% ( 7) 00:30:18.888 30907.808 - 31136.755: 99.3147% ( 6) 00:30:18.888 31136.755 - 31365.701: 99.3969% ( 6) 00:30:18.888 31365.701 - 31594.648: 99.5066% ( 8) 00:30:18.888 31594.648 - 31823.595: 99.5888% ( 6) 00:30:18.888 31823.595 - 32052.541: 99.6848% ( 7) 00:30:18.888 32052.541 - 32281.488: 99.7807% ( 7) 00:30:18.888 32281.488 - 32510.435: 99.8766% ( 7) 00:30:18.888 32510.435 - 32739.382: 99.9726% ( 7) 00:30:18.888 32739.382 - 32968.328: 100.0000% ( 2) 00:30:18.888 00:30:18.888 05:41:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:18.888 00:30:18.888 real 0m2.706s 00:30:18.888 user 0m2.278s 00:30:18.888 sys 0m0.318s 00:30:18.888 05:41:38 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:18.888 05:41:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:30:18.888 ************************************ 00:30:18.888 END TEST nvme_perf 00:30:18.888 ************************************ 00:30:18.888 05:41:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:18.888 05:41:38 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:18.888 05:41:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:18.888 05:41:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:18.888 ************************************ 00:30:18.888 START TEST nvme_hello_world 00:30:18.888 ************************************ 00:30:18.888 05:41:38 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:19.157 Initializing NVMe Controllers 00:30:19.157 Attached to 0000:00:10.0 00:30:19.157 Namespace ID: 1 size: 6GB 00:30:19.157 Attached to 0000:00:11.0 00:30:19.157 Namespace ID: 1 size: 5GB 00:30:19.157 Attached to 0000:00:13.0 00:30:19.157 Namespace ID: 1 size: 1GB 00:30:19.157 Attached to 0000:00:12.0 00:30:19.157 Namespace ID: 1 size: 4GB 00:30:19.157 Namespace ID: 2 size: 4GB 00:30:19.157 Namespace ID: 3 size: 4GB 00:30:19.157 Initialization complete. 00:30:19.157 INFO: using host memory buffer for IO 00:30:19.157 Hello world! 00:30:19.157 INFO: using host memory buffer for IO 00:30:19.157 Hello world! 00:30:19.157 INFO: using host memory buffer for IO 00:30:19.157 Hello world! 00:30:19.157 INFO: using host memory buffer for IO 00:30:19.157 Hello world! 00:30:19.157 INFO: using host memory buffer for IO 00:30:19.157 Hello world! 00:30:19.157 INFO: using host memory buffer for IO 00:30:19.157 Hello world! 
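Each "Hello world!" above is hello_world round-tripping a small buffer through one namespace: probe and attach every controller, write the string, read it back, compare. A minimal C sketch of that flow, assuming SPDK's public NVMe driver API (spdk_nvme_probe, spdk_nvme_ns_cmd_write, and friends); the buffer size, LBA 0, and the elided read-back compare are illustrative, not the exact test source.

    /* Hedged sketch of the hello_world flow: probe controllers, then write
     * (and, in the real test, read back) one sector per active namespace.
     * Error handling is elided for brevity. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) {
        return true; /* attach to every controller the probe finds */
    }

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        (void)cpl;
        *(bool *)arg = true;
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts) {
        struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        /* "INFO: using host memory buffer for IO": pinned, DMA-able host
         * memory, used when the controller exposes no memory buffer of its own */
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        uint32_t nsid;
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            bool done = false;
            snprintf(buf, 0x1000, "Hello world!\n");
            spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1, io_done, &done, 0);
            while (!done) spdk_nvme_qpair_process_completions(qp, 0);
            /* a matching spdk_nvme_ns_cmd_read plus memcmp follows in the
             * real example before it prints "Hello world!" */
        }
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qp);
    }

    int main(void) {
        struct spdk_env_opts env_opts;
        spdk_env_opts_init(&env_opts);
        spdk_env_init(&env_opts);
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }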
00:30:19.157 00:30:19.157 real 0m0.351s 00:30:19.157 user 0m0.128s 00:30:19.157 sys 0m0.167s 00:30:19.157 05:41:38 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:19.157 05:41:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:19.157 ************************************ 00:30:19.157 END TEST nvme_hello_world 00:30:19.157 ************************************ 00:30:19.157 05:41:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:19.157 05:41:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:19.157 05:41:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:19.157 05:41:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:19.157 ************************************ 00:30:19.157 START TEST nvme_sgl 00:30:19.157 ************************************ 00:30:19.157 05:41:38 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:19.416 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:30:19.416 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:30:19.416 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:30:19.416 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:30:19.416 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:30:19.416 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:30:19.416 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:30:19.416 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:30:19.416 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:30:19.416 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:30:19.417 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:30:19.417 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:30:19.417 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:30:19.417 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:30:19.417 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:30:19.417 NVMe Readv/Writev Request test 00:30:19.417 Attached to 0000:00:10.0 00:30:19.417 Attached to 0000:00:11.0 00:30:19.417 Attached to 0000:00:13.0 00:30:19.417 Attached to 0000:00:12.0 00:30:19.417 0000:00:10.0: build_io_request_2 test passed 00:30:19.417 0000:00:10.0: build_io_request_4 test passed 00:30:19.417 0000:00:10.0: build_io_request_5 test passed 00:30:19.417 0000:00:10.0: build_io_request_6 test passed 00:30:19.417 0000:00:10.0: build_io_request_7 test passed 00:30:19.417 0000:00:10.0: build_io_request_10 test passed 00:30:19.417 0000:00:11.0: build_io_request_2 test passed 00:30:19.417 0000:00:11.0: build_io_request_4 test passed 00:30:19.417 0000:00:11.0: build_io_request_5 test passed 00:30:19.417 0000:00:11.0: build_io_request_6 test passed 00:30:19.417 0000:00:11.0: build_io_request_7 test passed 00:30:19.417 0000:00:11.0: build_io_request_10 test passed 00:30:19.417 Cleaning up... 00:30:19.675 00:30:19.675 real 0m0.375s 00:30:19.675 user 0m0.185s 00:30:19.675 sys 0m0.140s 00:30:19.675 05:41:39 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:19.675 05:41:39 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:30:19.675 ************************************ 00:30:19.675 END TEST nvme_sgl 00:30:19.675 ************************************ 00:30:19.675 05:41:39 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:19.675 05:41:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:19.675 05:41:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:19.675 05:41:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:19.675 ************************************ 00:30:19.675 START TEST nvme_e2edp 00:30:19.675 ************************************ 00:30:19.675 05:41:39 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:19.934 NVMe Write/Read with End-to-End data protection test 00:30:19.934 Attached to 0000:00:10.0 00:30:19.934 Attached to 0000:00:11.0 00:30:19.934 Attached to 0000:00:13.0 00:30:19.934 Attached to 0000:00:12.0 00:30:19.934 Cleaning up... 
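nvme_dp passes when the controllers accept and verify protected writes and reads. Under NVMe end-to-end data protection, each logical block carries an 8-byte protection information tuple; a sketch of its layout per the NVMe/T10 PI Type 1 format (the struct name is mine, the field meanings come from the spec):

    #include <stdint.h>

    /* 8-byte T10 protection information carried with each logical block
     * (NVMe PI, Types 1-3). All fields are big-endian on the medium. */
    struct t10_pi_tuple {
        uint16_t guard;   /* CRC-16 (T10-DIF polynomial 0x8BB7) over the block data */
        uint16_t app_tag; /* opaque to the controller; 0xFFFF disables checking */
        uint32_t ref_tag; /* Type 1: low 32 bits of the expected starting LBA */
    };

On a write with PRACT cleared, the host fills this tuple for every block; the controller recomputes the guard CRC and checks the reference tag against the LBA before committing the data, which is the behavior this pass exercises.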
00:30:19.934
00:30:19.934 real 0m0.288s
00:30:19.934 user 0m0.101s
00:30:19.934 sys 0m0.142s
00:30:19.934 05:41:39 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:19.934 05:41:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:30:19.934 ************************************
00:30:19.934 END TEST nvme_e2edp
00:30:19.934 ************************************
00:30:19.934 05:41:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:19.934 05:41:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:30:19.934 05:41:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:19.934 05:41:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:19.934 ************************************
00:30:19.934 START TEST nvme_reserve
00:30:19.934 ************************************
00:30:19.934 05:41:39 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:30:20.193 =====================================================
00:30:20.193 NVMe Controller at PCI bus 0, device 16, function 0
00:30:20.193 =====================================================
00:30:20.193 Reservations: Not Supported
00:30:20.193 =====================================================
00:30:20.193 NVMe Controller at PCI bus 0, device 17, function 0
00:30:20.193 =====================================================
00:30:20.193 Reservations: Not Supported
00:30:20.193 =====================================================
00:30:20.193 NVMe Controller at PCI bus 0, device 19, function 0
00:30:20.193 =====================================================
00:30:20.193 Reservations: Not Supported
00:30:20.193 =====================================================
00:30:20.193 NVMe Controller at PCI bus 0, device 18, function 0
00:30:20.193 =====================================================
00:30:20.193 Reservations: Not Supported
00:30:20.193 Reservation test passed
00:30:20.193
00:30:20.193 real 0m0.286s
00:30:20.193 user 0m0.108s
00:30:20.193 sys 0m0.129s
00:30:20.193 05:41:40 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:20.193 05:41:40 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:30:20.193 ************************************
00:30:20.193 END TEST nvme_reserve
00:30:20.194 ************************************
00:30:20.194 05:41:40 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:20.194 05:41:40 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:30:20.194 05:41:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:20.194 05:41:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:20.194 ************************************
00:30:20.194 START TEST nvme_err_injection
00:30:20.194 ************************************
00:30:20.194 05:41:40 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:30:20.762 NVMe Error Injection test
00:30:20.762 Attached to 0000:00:10.0
00:30:20.762 Attached to 0000:00:11.0
00:30:20.762 Attached to 0000:00:13.0
00:30:20.762 Attached to 0000:00:12.0
00:30:20.762 0000:00:10.0: get features failed as expected
00:30:20.762 0000:00:11.0: get features failed as expected
00:30:20.762 0000:00:13.0: get features failed as expected
00:30:20.762 0000:00:12.0: get features failed as expected
00:30:20.762
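The four "get features failed as expected" lines are the armed phase of the test: a fake status is injected on the Get Features admin command, the command is issued and must fail, then the injection is removed and the same command must succeed (the "successfully as expected" lines that follow). A sketch of that arm/verify/disarm cycle, assuming SPDK's command error-injection helpers; the injected status code is illustrative.

    /* Hedged sketch: force the next Get Features on the admin queue to fail,
     * then clear the injection so the retry succeeds. */
    #include "spdk/nvme.h"

    static void arm_and_clear(struct spdk_nvme_ctrlr *ctrlr) {
        /* NULL qpair selects the admin queue; fail 1 command with
         * Invalid Field in Command (generic status). */
        spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES, false /* still submit */, 0,
                1 /* err_count */, SPDK_NVME_SCT_GENERIC,
                SPDK_NVME_SC_INVALID_FIELD);

        /* ... issue Get Features here and expect a failed completion ... */

        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES);
        /* ... the same Get Features now completes successfully ... */
    }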
0000:00:10.0: get features successfully as expected
00:30:20.762 0000:00:11.0: get features successfully as expected
00:30:20.762 0000:00:13.0: get features successfully as expected
00:30:20.762 0000:00:12.0: get features successfully as expected
00:30:20.762 0000:00:10.0: read failed as expected
00:30:20.762 0000:00:13.0: read failed as expected
00:30:20.762 0000:00:11.0: read failed as expected
00:30:20.762 0000:00:12.0: read failed as expected
00:30:20.762 0000:00:11.0: read successfully as expected
00:30:20.762 0000:00:10.0: read successfully as expected
00:30:20.762 0000:00:13.0: read successfully as expected
00:30:20.762 0000:00:12.0: read successfully as expected
00:30:20.762 Cleaning up...
00:30:20.762
00:30:20.762 real 0m0.297s
00:30:20.762 user 0m0.114s
00:30:20.762 sys 0m0.140s
00:30:20.762 05:41:40 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:20.762 05:41:40 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:30:20.762 ************************************
00:30:20.762 END TEST nvme_err_injection
00:30:20.762 ************************************
00:30:20.762 05:41:40 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:30:20.762 05:41:40 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']'
00:30:20.762 05:41:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:20.762 05:41:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:30:20.762 ************************************
00:30:20.762 START TEST nvme_overhead
00:30:20.762 ************************************
00:30:20.762 05:41:40 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:30:22.141 Initializing NVMe Controllers
00:30:22.141 Attached to 0000:00:10.0
00:30:22.141 Attached to 0000:00:11.0
00:30:22.141 Attached to 0000:00:13.0
00:30:22.141 Attached to 0000:00:12.0
00:30:22.141 Initialization complete. Launching workers.
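overhead (-o 4096 -t 1 -H) now spends one second timing each submit call and each completion callback, then prints the averages and the cumulative-percent histograms that follow. A minimal sketch of one way to collect such a distribution, using a log2-bucketed histogram over a monotonic clock; the real tool counts TSC ticks, so the clock choice and bucket scheme here are assumptions.

    #include <stdint.h>
    #include <time.h>

    #define BUCKETS 64

    static uint64_t now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* log2-scaled histogram: wide dynamic range, O(1) insert */
    static uint64_t submit_hist[BUCKETS];

    static void record_submit(uint64_t start_ns, uint64_t end_ns) {
        uint64_t delta = end_ns - start_ns;
        unsigned b = 0;
        while (delta >>= 1) b++;          /* floor(log2(delta)) */
        submit_hist[b < BUCKETS ? b : BUCKETS - 1]++;
    }

    /* usage around a submit:
     *   uint64_t t0 = now_ns();
     *   ...submit one IO...
     *   record_submit(t0, now_ns());
     * the "submit (in ns) avg, min, max" line and the cumulative-percent
     * tables below are then derived from counters like these. */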
00:30:22.141 submit (in ns) avg, min, max = 14491.0, 11281.2, 52285.6 00:30:22.141 complete (in ns) avg, min, max = 8345.1, 6306.6, 41669.0 00:30:22.141 00:30:22.141 Submit histogram 00:30:22.141 ================ 00:30:22.141 Range in us Cumulative Count 00:30:22.141 11.235 - 11.291: 0.0120% ( 1) 00:30:22.141 11.459 - 11.514: 0.0241% ( 1) 00:30:22.141 11.514 - 11.570: 0.0482% ( 2) 00:30:22.141 11.570 - 11.626: 0.0964% ( 4) 00:30:22.141 11.626 - 11.682: 0.1927% ( 8) 00:30:22.141 11.682 - 11.738: 0.2771% ( 7) 00:30:22.141 11.738 - 11.794: 0.3975% ( 10) 00:30:22.141 11.794 - 11.850: 0.5662% ( 14) 00:30:22.141 11.850 - 11.906: 0.6987% ( 11) 00:30:22.141 11.906 - 11.962: 0.8312% ( 11) 00:30:22.141 11.962 - 12.017: 1.0119% ( 15) 00:30:22.141 12.017 - 12.073: 1.1083% ( 8) 00:30:22.141 12.073 - 12.129: 1.2167% ( 9) 00:30:22.141 12.129 - 12.185: 1.4336% ( 18) 00:30:22.141 12.185 - 12.241: 1.6625% ( 19) 00:30:22.141 12.241 - 12.297: 1.8432% ( 15) 00:30:22.141 12.297 - 12.353: 2.0118% ( 14) 00:30:22.141 12.353 - 12.409: 2.2286% ( 18) 00:30:22.141 12.409 - 12.465: 2.5660% ( 28) 00:30:22.141 12.465 - 12.521: 2.9153% ( 29) 00:30:22.141 12.521 - 12.576: 3.2285% ( 26) 00:30:22.141 12.576 - 12.632: 3.6381% ( 34) 00:30:22.141 12.632 - 12.688: 4.2405% ( 50) 00:30:22.141 12.688 - 12.744: 4.8548% ( 51) 00:30:22.141 12.744 - 12.800: 5.8427% ( 82) 00:30:22.141 12.800 - 12.856: 6.8787% ( 86) 00:30:22.141 12.856 - 12.912: 8.0713% ( 99) 00:30:22.141 12.912 - 12.968: 9.5892% ( 126) 00:30:22.141 12.968 - 13.024: 11.0830% ( 124) 00:30:22.141 13.024 - 13.079: 12.8177% ( 144) 00:30:22.141 13.079 - 13.135: 14.6007% ( 148) 00:30:22.141 13.135 - 13.191: 16.8654% ( 188) 00:30:22.141 13.191 - 13.247: 19.0941% ( 185) 00:30:22.141 13.247 - 13.303: 21.3107% ( 184) 00:30:22.141 13.303 - 13.359: 23.7080% ( 199) 00:30:22.141 13.359 - 13.415: 25.9125% ( 183) 00:30:22.141 13.415 - 13.471: 28.0569% ( 178) 00:30:22.141 13.471 - 13.527: 30.0325% ( 164) 00:30:22.141 13.527 - 13.583: 32.0925% ( 171) 00:30:22.141 13.583 - 13.638: 33.9839% ( 157) 00:30:22.141 13.638 - 13.694: 36.2727% ( 190) 00:30:22.141 13.694 - 13.750: 38.6339% ( 196) 00:30:22.141 13.750 - 13.806: 40.4891% ( 154) 00:30:22.141 13.806 - 13.862: 42.9948% ( 208) 00:30:22.141 13.862 - 13.918: 45.6812% ( 223) 00:30:22.141 13.918 - 13.974: 48.0304% ( 195) 00:30:22.141 13.974 - 14.030: 50.3313% ( 191) 00:30:22.141 14.030 - 14.086: 52.7647% ( 202) 00:30:22.141 14.086 - 14.141: 55.1861% ( 201) 00:30:22.141 14.141 - 14.197: 57.9328% ( 228) 00:30:22.141 14.197 - 14.253: 60.4385% ( 208) 00:30:22.141 14.253 - 14.309: 62.8599% ( 201) 00:30:22.141 14.309 - 14.421: 67.4136% ( 378) 00:30:22.141 14.421 - 14.533: 71.5095% ( 340) 00:30:22.141 14.533 - 14.645: 74.9789% ( 288) 00:30:22.141 14.645 - 14.756: 77.8581% ( 239) 00:30:22.141 14.756 - 14.868: 80.5807% ( 226) 00:30:22.141 14.868 - 14.980: 82.9659% ( 198) 00:30:22.141 14.980 - 15.092: 85.0139% ( 170) 00:30:22.141 15.092 - 15.203: 86.8088% ( 149) 00:30:22.141 15.203 - 15.315: 88.0496% ( 103) 00:30:22.141 15.315 - 15.427: 89.1338% ( 90) 00:30:22.141 15.427 - 15.539: 89.9530% ( 68) 00:30:22.141 15.539 - 15.651: 90.5192% ( 47) 00:30:22.141 15.651 - 15.762: 90.9409% ( 35) 00:30:22.141 15.762 - 15.874: 91.2300% ( 24) 00:30:22.141 15.874 - 15.986: 91.5552% ( 27) 00:30:22.141 15.986 - 16.098: 91.8805% ( 27) 00:30:22.141 16.098 - 16.210: 92.2901% ( 34) 00:30:22.141 16.210 - 16.321: 92.6635% ( 31) 00:30:22.141 16.321 - 16.433: 92.9647% ( 25) 00:30:22.141 16.433 - 16.545: 93.2779% ( 26) 00:30:22.141 16.545 - 16.657: 93.5309% ( 21) 00:30:22.141 
16.657 - 16.769: 93.7839% ( 21) 00:30:22.141 16.769 - 16.880: 93.9766% ( 16) 00:30:22.141 16.880 - 16.992: 94.1573% ( 15) 00:30:22.141 16.992 - 17.104: 94.2417% ( 7) 00:30:22.141 17.104 - 17.216: 94.3501% ( 9) 00:30:22.141 17.216 - 17.328: 94.4705% ( 10) 00:30:22.141 17.328 - 17.439: 94.5187% ( 4) 00:30:22.141 17.439 - 17.551: 94.5428% ( 2) 00:30:22.141 17.551 - 17.663: 94.5549% ( 1) 00:30:22.141 17.663 - 17.775: 94.6031% ( 4) 00:30:22.141 17.775 - 17.886: 94.6272% ( 2) 00:30:22.141 17.886 - 17.998: 94.6512% ( 2) 00:30:22.141 17.998 - 18.110: 94.6753% ( 2) 00:30:22.141 18.110 - 18.222: 94.7115% ( 3) 00:30:22.141 18.222 - 18.334: 94.7235% ( 1) 00:30:22.141 18.445 - 18.557: 94.7356% ( 1) 00:30:22.141 18.557 - 18.669: 94.7597% ( 2) 00:30:22.141 18.669 - 18.781: 94.7717% ( 1) 00:30:22.141 18.781 - 18.893: 94.8199% ( 4) 00:30:22.141 18.893 - 19.004: 94.8801% ( 5) 00:30:22.141 19.004 - 19.116: 95.0247% ( 12) 00:30:22.141 19.116 - 19.228: 95.0729% ( 4) 00:30:22.141 19.228 - 19.340: 95.1211% ( 4) 00:30:22.141 19.340 - 19.452: 95.2295% ( 9) 00:30:22.141 19.452 - 19.563: 95.2536% ( 2) 00:30:22.141 19.563 - 19.675: 95.3259% ( 6) 00:30:22.141 19.675 - 19.787: 95.4584% ( 11) 00:30:22.141 19.787 - 19.899: 95.6270% ( 14) 00:30:22.141 19.899 - 20.010: 95.7595% ( 11) 00:30:22.141 20.010 - 20.122: 95.8559% ( 8) 00:30:22.141 20.122 - 20.234: 95.9643% ( 9) 00:30:22.141 20.234 - 20.346: 96.0728% ( 9) 00:30:22.141 20.346 - 20.458: 96.1691% ( 8) 00:30:22.141 20.458 - 20.569: 96.2294% ( 5) 00:30:22.141 20.569 - 20.681: 96.3257% ( 8) 00:30:22.141 20.681 - 20.793: 96.5185% ( 16) 00:30:22.141 20.793 - 20.905: 96.6510% ( 11) 00:30:22.141 20.905 - 21.017: 96.7233% ( 6) 00:30:22.141 21.017 - 21.128: 96.8438% ( 10) 00:30:22.141 21.128 - 21.240: 96.9401% ( 8) 00:30:22.141 21.240 - 21.352: 97.0245% ( 7) 00:30:22.141 21.352 - 21.464: 97.1570% ( 11) 00:30:22.141 21.464 - 21.576: 97.2533% ( 8) 00:30:22.141 21.576 - 21.687: 97.2895% ( 3) 00:30:22.141 21.687 - 21.799: 97.3738% ( 7) 00:30:22.141 21.799 - 21.911: 97.4581% ( 7) 00:30:22.141 21.911 - 22.023: 97.5666% ( 9) 00:30:22.141 22.023 - 22.134: 97.6268% ( 5) 00:30:22.141 22.134 - 22.246: 97.6870% ( 5) 00:30:22.141 22.246 - 22.358: 97.7232% ( 3) 00:30:22.141 22.358 - 22.470: 97.7714% ( 4) 00:30:22.141 22.470 - 22.582: 97.8436% ( 6) 00:30:22.141 22.582 - 22.693: 97.9159% ( 6) 00:30:22.141 22.693 - 22.805: 97.9521% ( 3) 00:30:22.141 22.805 - 22.917: 98.0123% ( 5) 00:30:22.141 22.917 - 23.029: 98.0725% ( 5) 00:30:22.141 23.029 - 23.141: 98.0846% ( 1) 00:30:22.141 23.141 - 23.252: 98.1207% ( 3) 00:30:22.141 23.252 - 23.364: 98.1448% ( 2) 00:30:22.141 23.364 - 23.476: 98.2050% ( 5) 00:30:22.141 23.476 - 23.588: 98.2171% ( 1) 00:30:22.141 23.588 - 23.700: 98.2773% ( 5) 00:30:22.141 23.700 - 23.811: 98.3014% ( 2) 00:30:22.141 23.923 - 24.035: 98.3496% ( 4) 00:30:22.141 24.035 - 24.147: 98.3737% ( 2) 00:30:22.141 24.147 - 24.259: 98.4219% ( 4) 00:30:22.141 24.259 - 24.370: 98.4821% ( 5) 00:30:22.141 24.370 - 24.482: 98.5183% ( 3) 00:30:22.141 24.482 - 24.594: 98.5664% ( 4) 00:30:22.141 24.594 - 24.706: 98.6026% ( 3) 00:30:22.142 24.706 - 24.817: 98.6146% ( 1) 00:30:22.142 24.817 - 24.929: 98.6387% ( 2) 00:30:22.142 24.929 - 25.041: 98.6990% ( 5) 00:30:22.142 25.041 - 25.153: 98.7712% ( 6) 00:30:22.142 25.153 - 25.265: 98.8315% ( 5) 00:30:22.142 25.265 - 25.376: 98.8676% ( 3) 00:30:22.142 25.376 - 25.488: 98.9278% ( 5) 00:30:22.142 25.488 - 25.600: 98.9881% ( 5) 00:30:22.142 25.600 - 25.712: 99.0122% ( 2) 00:30:22.142 25.712 - 25.824: 99.0483% ( 3) 00:30:22.142 25.824 - 25.935: 
99.0724% ( 2) 00:30:22.142 25.935 - 26.047: 99.1085% ( 3) 00:30:22.142 26.047 - 26.159: 99.1326% ( 2) 00:30:22.142 26.159 - 26.271: 99.1688% ( 3) 00:30:22.142 26.271 - 26.383: 99.1808% ( 1) 00:30:22.142 26.383 - 26.494: 99.2049% ( 2) 00:30:22.142 26.494 - 26.606: 99.2531% ( 4) 00:30:22.142 26.606 - 26.718: 99.2772% ( 2) 00:30:22.142 26.718 - 26.830: 99.3133% ( 3) 00:30:22.142 26.830 - 26.941: 99.3374% ( 2) 00:30:22.142 26.941 - 27.053: 99.3615% ( 2) 00:30:22.142 27.165 - 27.277: 99.3736% ( 1) 00:30:22.142 27.389 - 27.500: 99.3856% ( 1) 00:30:22.142 27.500 - 27.612: 99.4097% ( 2) 00:30:22.142 27.836 - 27.948: 99.4218% ( 1) 00:30:22.142 28.059 - 28.171: 99.4338% ( 1) 00:30:22.142 28.283 - 28.395: 99.4458% ( 1) 00:30:22.142 28.618 - 28.842: 99.4579% ( 1) 00:30:22.142 28.842 - 29.066: 99.4699% ( 1) 00:30:22.142 29.513 - 29.736: 99.5181% ( 4) 00:30:22.142 29.736 - 29.960: 99.5302% ( 1) 00:30:22.142 30.183 - 30.407: 99.5663% ( 3) 00:30:22.142 30.407 - 30.631: 99.6145% ( 4) 00:30:22.142 30.631 - 30.854: 99.6266% ( 1) 00:30:22.142 30.854 - 31.078: 99.6506% ( 2) 00:30:22.142 31.078 - 31.301: 99.6868% ( 3) 00:30:22.142 31.301 - 31.525: 99.7109% ( 2) 00:30:22.142 31.525 - 31.748: 99.7470% ( 3) 00:30:22.142 31.748 - 31.972: 99.7591% ( 1) 00:30:22.142 31.972 - 32.196: 99.7952% ( 3) 00:30:22.142 32.196 - 32.419: 99.8073% ( 1) 00:30:22.142 32.419 - 32.643: 99.8313% ( 2) 00:30:22.142 32.866 - 33.090: 99.8554% ( 2) 00:30:22.142 33.314 - 33.537: 99.8675% ( 1) 00:30:22.142 33.761 - 33.984: 99.8795% ( 1) 00:30:22.142 33.984 - 34.208: 99.8916% ( 1) 00:30:22.142 34.208 - 34.431: 99.9036% ( 1) 00:30:22.142 34.431 - 34.655: 99.9157% ( 1) 00:30:22.142 34.655 - 34.879: 99.9277% ( 1) 00:30:22.142 35.549 - 35.773: 99.9398% ( 1) 00:30:22.142 38.232 - 38.456: 99.9518% ( 1) 00:30:22.142 38.679 - 38.903: 99.9639% ( 1) 00:30:22.142 38.903 - 39.127: 99.9759% ( 1) 00:30:22.142 44.493 - 44.716: 99.9880% ( 1) 00:30:22.142 52.094 - 52.318: 100.0000% ( 1) 00:30:22.142 00:30:22.142 Complete histogram 00:30:22.142 ================== 00:30:22.142 Range in us Cumulative Count 00:30:22.142 6.288 - 6.316: 0.0120% ( 1) 00:30:22.142 6.316 - 6.344: 0.0241% ( 1) 00:30:22.142 6.344 - 6.372: 0.0482% ( 2) 00:30:22.142 6.372 - 6.400: 0.0602% ( 1) 00:30:22.142 6.400 - 6.428: 0.0964% ( 3) 00:30:22.142 6.428 - 6.456: 0.1205% ( 2) 00:30:22.142 6.456 - 6.484: 0.1446% ( 2) 00:30:22.142 6.484 - 6.512: 0.1927% ( 4) 00:30:22.142 6.512 - 6.540: 0.2891% ( 8) 00:30:22.142 6.540 - 6.568: 0.3734% ( 7) 00:30:22.142 6.568 - 6.596: 0.4939% ( 10) 00:30:22.142 6.596 - 6.624: 0.6867% ( 16) 00:30:22.142 6.624 - 6.652: 0.9035% ( 18) 00:30:22.142 6.652 - 6.679: 1.1083% ( 17) 00:30:22.142 6.679 - 6.707: 1.4456% ( 28) 00:30:22.142 6.707 - 6.735: 1.8672% ( 35) 00:30:22.142 6.735 - 6.763: 2.4214% ( 46) 00:30:22.142 6.763 - 6.791: 2.9755% ( 46) 00:30:22.142 6.791 - 6.819: 3.6743% ( 58) 00:30:22.142 6.819 - 6.847: 4.5175% ( 70) 00:30:22.142 6.847 - 6.875: 5.2765% ( 63) 00:30:22.142 6.875 - 6.903: 6.1197% ( 70) 00:30:22.142 6.903 - 6.931: 6.9389% ( 68) 00:30:22.142 6.931 - 6.959: 7.7340% ( 66) 00:30:22.142 6.959 - 6.987: 8.4930% ( 63) 00:30:22.142 6.987 - 7.015: 9.1917% ( 58) 00:30:22.142 7.015 - 7.043: 9.8904% ( 58) 00:30:22.142 7.043 - 7.071: 10.4807% ( 49) 00:30:22.142 7.071 - 7.099: 11.0589% ( 48) 00:30:22.142 7.099 - 7.127: 11.7215% ( 55) 00:30:22.142 7.127 - 7.155: 12.4804% ( 63) 00:30:22.142 7.155 - 7.210: 14.1549% ( 139) 00:30:22.142 7.210 - 7.266: 16.1306% ( 164) 00:30:22.142 7.266 - 7.322: 17.9135% ( 148) 00:30:22.142 7.322 - 7.378: 20.2747% ( 196) 00:30:22.142 
7.378 - 7.434: 22.3226% ( 170) 00:30:22.142 7.434 - 7.490: 24.6115% ( 190) 00:30:22.142 7.490 - 7.546: 27.1052% ( 207) 00:30:22.142 7.546 - 7.602: 29.7555% ( 220) 00:30:22.142 7.602 - 7.658: 32.4539% ( 224) 00:30:22.142 7.658 - 7.714: 34.9717% ( 209) 00:30:22.142 7.714 - 7.769: 37.5497% ( 214) 00:30:22.142 7.769 - 7.825: 40.3445% ( 232) 00:30:22.142 7.825 - 7.881: 42.9707% ( 218) 00:30:22.142 7.881 - 7.937: 46.1872% ( 267) 00:30:22.142 7.937 - 7.993: 49.6205% ( 285) 00:30:22.142 7.993 - 8.049: 52.9936% ( 280) 00:30:22.142 8.049 - 8.105: 56.3185% ( 276) 00:30:22.142 8.105 - 8.161: 59.8362% ( 292) 00:30:22.142 8.161 - 8.217: 62.7876% ( 245) 00:30:22.142 8.217 - 8.272: 65.1367% ( 195) 00:30:22.142 8.272 - 8.328: 67.1967% ( 171) 00:30:22.142 8.328 - 8.384: 68.9194% ( 143) 00:30:22.142 8.384 - 8.440: 70.4975% ( 131) 00:30:22.142 8.440 - 8.496: 71.9190% ( 118) 00:30:22.142 8.496 - 8.552: 73.5213% ( 133) 00:30:22.142 8.552 - 8.608: 75.2921% ( 147) 00:30:22.142 8.608 - 8.664: 76.7859% ( 124) 00:30:22.142 8.664 - 8.720: 78.2436% ( 121) 00:30:22.142 8.720 - 8.776: 79.5808% ( 111) 00:30:22.142 8.776 - 8.831: 81.0143% ( 119) 00:30:22.142 8.831 - 8.887: 82.3154% ( 108) 00:30:22.142 8.887 - 8.943: 83.4116% ( 91) 00:30:22.142 8.943 - 8.999: 84.3874% ( 81) 00:30:22.142 8.999 - 9.055: 85.3632% ( 81) 00:30:22.142 9.055 - 9.111: 85.9535% ( 49) 00:30:22.142 9.111 - 9.167: 86.5438% ( 49) 00:30:22.142 9.167 - 9.223: 86.9895% ( 37) 00:30:22.142 9.223 - 9.279: 87.6400% ( 54) 00:30:22.142 9.279 - 9.334: 88.3508% ( 59) 00:30:22.142 9.334 - 9.390: 88.8809% ( 44) 00:30:22.142 9.390 - 9.446: 89.3868% ( 42) 00:30:22.142 9.446 - 9.502: 89.8446% ( 38) 00:30:22.142 9.502 - 9.558: 90.0976% ( 21) 00:30:22.142 9.558 - 9.614: 90.4710% ( 31) 00:30:22.142 9.614 - 9.670: 91.0493% ( 48) 00:30:22.142 9.670 - 9.726: 91.7239% ( 56) 00:30:22.142 9.726 - 9.782: 92.2299% ( 42) 00:30:22.142 9.782 - 9.838: 92.8683% ( 53) 00:30:22.142 9.838 - 9.893: 93.3743% ( 42) 00:30:22.142 9.893 - 9.949: 93.6634% ( 24) 00:30:22.142 9.949 - 10.005: 93.9164% ( 21) 00:30:22.142 10.005 - 10.061: 94.1332% ( 18) 00:30:22.142 10.061 - 10.117: 94.4344% ( 25) 00:30:22.142 10.117 - 10.173: 94.6874% ( 21) 00:30:22.142 10.173 - 10.229: 94.9042% ( 18) 00:30:22.142 10.229 - 10.285: 95.1331% ( 19) 00:30:22.142 10.285 - 10.341: 95.2656% ( 11) 00:30:22.142 10.341 - 10.397: 95.5909% ( 27) 00:30:22.142 10.397 - 10.452: 95.8077% ( 18) 00:30:22.142 10.452 - 10.508: 96.0005% ( 16) 00:30:22.142 10.508 - 10.564: 96.1932% ( 16) 00:30:22.142 10.564 - 10.620: 96.3257% ( 11) 00:30:22.142 10.620 - 10.676: 96.3860% ( 5) 00:30:22.142 10.676 - 10.732: 96.5185% ( 11) 00:30:22.142 10.732 - 10.788: 96.5667% ( 4) 00:30:22.142 10.788 - 10.844: 96.6871% ( 10) 00:30:22.142 10.844 - 10.900: 96.7594% ( 6) 00:30:22.142 10.900 - 10.955: 96.8076% ( 4) 00:30:22.142 10.955 - 11.011: 96.9040% ( 8) 00:30:22.142 11.011 - 11.067: 96.9522% ( 4) 00:30:22.142 11.067 - 11.123: 97.0124% ( 5) 00:30:22.142 11.123 - 11.179: 97.0967% ( 7) 00:30:22.142 11.179 - 11.235: 97.1329% ( 3) 00:30:22.142 11.347 - 11.403: 97.1931% ( 5) 00:30:22.142 11.403 - 11.459: 97.2292% ( 3) 00:30:22.142 11.459 - 11.514: 97.2533% ( 2) 00:30:22.142 11.514 - 11.570: 97.2654% ( 1) 00:30:22.142 11.626 - 11.682: 97.2895% ( 2) 00:30:22.142 11.682 - 11.738: 97.3136% ( 2) 00:30:22.142 11.794 - 11.850: 97.3256% ( 1) 00:30:22.142 11.850 - 11.906: 97.3377% ( 1) 00:30:22.142 12.017 - 12.073: 97.3497% ( 1) 00:30:22.142 12.073 - 12.129: 97.3618% ( 1) 00:30:22.142 12.185 - 12.241: 97.3859% ( 2) 00:30:22.142 12.297 - 12.353: 97.3979% ( 1) 
00:30:22.142 12.353 - 12.409: 97.4220% ( 2) 00:30:22.142 12.465 - 12.521: 97.4340% ( 1) 00:30:22.142 12.521 - 12.576: 97.4461% ( 1) 00:30:22.142 12.576 - 12.632: 97.4702% ( 2) 00:30:22.142 12.688 - 12.744: 97.4943% ( 2) 00:30:22.142 12.744 - 12.800: 97.5063% ( 1) 00:30:22.142 12.856 - 12.912: 97.5304% ( 2) 00:30:22.142 13.024 - 13.079: 97.5545% ( 2) 00:30:22.142 13.079 - 13.135: 97.5666% ( 1) 00:30:22.142 13.135 - 13.191: 97.5786% ( 1) 00:30:22.142 13.191 - 13.247: 97.5907% ( 1) 00:30:22.143 13.359 - 13.415: 97.6027% ( 1) 00:30:22.143 13.471 - 13.527: 97.6388% ( 3) 00:30:22.143 13.527 - 13.583: 97.6509% ( 1) 00:30:22.143 13.583 - 13.638: 97.6750% ( 2) 00:30:22.143 13.638 - 13.694: 97.6870% ( 1) 00:30:22.143 13.694 - 13.750: 97.7232% ( 3) 00:30:22.143 13.806 - 13.862: 97.7352% ( 1) 00:30:22.143 13.918 - 13.974: 97.7834% ( 4) 00:30:22.143 13.974 - 14.030: 97.8075% ( 2) 00:30:22.143 14.030 - 14.086: 97.8316% ( 2) 00:30:22.143 14.141 - 14.197: 97.8436% ( 1) 00:30:22.143 14.309 - 14.421: 97.8798% ( 3) 00:30:22.143 14.421 - 14.533: 97.9280% ( 4) 00:30:22.143 14.533 - 14.645: 97.9641% ( 3) 00:30:22.143 14.645 - 14.756: 98.0243% ( 5) 00:30:22.143 14.756 - 14.868: 98.0605% ( 3) 00:30:22.143 14.868 - 14.980: 98.0725% ( 1) 00:30:22.143 14.980 - 15.092: 98.0846% ( 1) 00:30:22.143 15.092 - 15.203: 98.1448% ( 5) 00:30:22.143 15.203 - 15.315: 98.1930% ( 4) 00:30:22.143 15.315 - 15.427: 98.2532% ( 5) 00:30:22.143 15.427 - 15.539: 98.2894% ( 3) 00:30:22.143 15.539 - 15.651: 98.3375% ( 4) 00:30:22.143 15.651 - 15.762: 98.3737% ( 3) 00:30:22.143 15.762 - 15.874: 98.3978% ( 2) 00:30:22.143 15.874 - 15.986: 98.4339% ( 3) 00:30:22.143 15.986 - 16.098: 98.4580% ( 2) 00:30:22.143 16.098 - 16.210: 98.4821% ( 2) 00:30:22.143 16.210 - 16.321: 98.5303% ( 4) 00:30:22.143 16.433 - 16.545: 98.5664% ( 3) 00:30:22.143 16.545 - 16.657: 98.5785% ( 1) 00:30:22.143 16.657 - 16.769: 98.5905% ( 1) 00:30:22.143 16.880 - 16.992: 98.6146% ( 2) 00:30:22.143 16.992 - 17.104: 98.6267% ( 1) 00:30:22.143 17.104 - 17.216: 98.6387% ( 1) 00:30:22.143 17.216 - 17.328: 98.6628% ( 2) 00:30:22.143 17.439 - 17.551: 98.6869% ( 2) 00:30:22.143 18.222 - 18.334: 98.7110% ( 2) 00:30:22.143 18.334 - 18.445: 98.7351% ( 2) 00:30:22.143 18.445 - 18.557: 98.7712% ( 3) 00:30:22.143 18.557 - 18.669: 98.7953% ( 2) 00:30:22.143 18.669 - 18.781: 98.8676% ( 6) 00:30:22.143 18.781 - 18.893: 98.9278% ( 5) 00:30:22.143 18.893 - 19.004: 98.9640% ( 3) 00:30:22.143 19.004 - 19.116: 99.0242% ( 5) 00:30:22.143 19.116 - 19.228: 99.0724% ( 4) 00:30:22.143 19.228 - 19.340: 99.1326% ( 5) 00:30:22.143 19.340 - 19.452: 99.1688% ( 3) 00:30:22.143 19.452 - 19.563: 99.1929% ( 2) 00:30:22.143 19.563 - 19.675: 99.2411% ( 4) 00:30:22.143 19.675 - 19.787: 99.2531% ( 1) 00:30:22.143 19.787 - 19.899: 99.3013% ( 4) 00:30:22.143 19.899 - 20.010: 99.3254% ( 2) 00:30:22.143 20.010 - 20.122: 99.3977% ( 6) 00:30:22.143 20.122 - 20.234: 99.4218% ( 2) 00:30:22.143 20.234 - 20.346: 99.4338% ( 1) 00:30:22.143 20.346 - 20.458: 99.4579% ( 2) 00:30:22.143 20.458 - 20.569: 99.4820% ( 2) 00:30:22.143 20.569 - 20.681: 99.5181% ( 3) 00:30:22.143 20.681 - 20.793: 99.5784% ( 5) 00:30:22.143 20.793 - 20.905: 99.6025% ( 2) 00:30:22.143 20.905 - 21.017: 99.6145% ( 1) 00:30:22.143 21.017 - 21.128: 99.6506% ( 3) 00:30:22.143 21.128 - 21.240: 99.6627% ( 1) 00:30:22.143 21.352 - 21.464: 99.6747% ( 1) 00:30:22.143 21.464 - 21.576: 99.6868% ( 1) 00:30:22.143 21.576 - 21.687: 99.6988% ( 1) 00:30:22.143 21.799 - 21.911: 99.7109% ( 1) 00:30:22.143 21.911 - 22.023: 99.7229% ( 1) 00:30:22.143 23.141 - 23.252: 
99.7350% ( 1) 00:30:22.143 24.594 - 24.706: 99.7470% ( 1) 00:30:22.143 24.706 - 24.817: 99.7591% ( 1) 00:30:22.143 24.817 - 24.929: 99.7711% ( 1) 00:30:22.143 25.041 - 25.153: 99.7832% ( 1) 00:30:22.143 25.153 - 25.265: 99.7952% ( 1) 00:30:22.143 25.376 - 25.488: 99.8193% ( 2) 00:30:22.143 25.488 - 25.600: 99.8313% ( 1) 00:30:22.143 25.600 - 25.712: 99.8434% ( 1) 00:30:22.143 25.712 - 25.824: 99.8795% ( 3) 00:30:22.143 25.935 - 26.047: 99.8916% ( 1) 00:30:22.143 26.047 - 26.159: 99.9157% ( 2) 00:30:22.143 26.159 - 26.271: 99.9277% ( 1) 00:30:22.143 26.383 - 26.494: 99.9398% ( 1) 00:30:22.143 27.500 - 27.612: 99.9518% ( 1) 00:30:22.143 33.537 - 33.761: 99.9639% ( 1) 00:30:22.143 37.338 - 37.562: 99.9759% ( 1) 00:30:22.143 41.139 - 41.362: 99.9880% ( 1) 00:30:22.143 41.586 - 41.810: 100.0000% ( 1) 00:30:22.143 00:30:22.143 00:30:22.143 real 0m1.305s 00:30:22.143 user 0m1.098s 00:30:22.143 sys 0m0.153s 00:30:22.143 05:41:41 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:22.143 05:41:41 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:30:22.143 ************************************ 00:30:22.143 END TEST nvme_overhead 00:30:22.143 ************************************ 00:30:22.143 05:41:41 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:22.143 05:41:41 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:30:22.143 05:41:41 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:22.143 05:41:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:22.143 ************************************ 00:30:22.143 START TEST nvme_arbitration 00:30:22.143 ************************************ 00:30:22.143 05:41:41 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:25.435 Initializing NVMe Controllers 00:30:25.435 Attached to 0000:00:10.0 00:30:25.435 Attached to 0000:00:11.0 00:30:25.435 Attached to 0000:00:13.0 00:30:25.435 Attached to 0000:00:12.0 00:30:25.435 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:25.435 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:30:25.435 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:30:25.435 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:30:25.435 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:30:25.435 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:30:25.435 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:25.435 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:25.435 Initialization complete. Launching workers. 
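Before the per-core threads start, arbitration creates IO queue pairs at different priority levels so the controller's weighted round robin arbiter, when CAP.AMS advertises it, can favor the urgent queues seen in the next lines. A sketch of attaching a priority to a queue pair, assuming SPDK's IO qpair options API:

    /* Hedged sketch: create one urgent-priority IO qpair. Only meaningful
     * when the controller supports weighted round robin arbitration. */
    #include "spdk/nvme.h"

    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr) {
        struct spdk_nvme_io_qpair_opts opts;
        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;  /* vs HIGH / MEDIUM / LOW */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }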
00:30:25.435 Starting thread on core 1 with urgent priority queue 00:30:25.435 Starting thread on core 2 with urgent priority queue 00:30:25.435 Starting thread on core 3 with urgent priority queue 00:30:25.435 Starting thread on core 0 with urgent priority queue 00:30:25.435 QEMU NVMe Ctrl (12340 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:30:25.435 QEMU NVMe Ctrl (12342 ) core 0: 448.00 IO/s 223.21 secs/100000 ios 00:30:25.435 QEMU NVMe Ctrl (12341 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:30:25.435 QEMU NVMe Ctrl (12342 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:30:25.435 QEMU NVMe Ctrl (12343 ) core 2: 448.00 IO/s 223.21 secs/100000 ios 00:30:25.435 QEMU NVMe Ctrl (12342 ) core 3: 512.00 IO/s 195.31 secs/100000 ios 00:30:25.435 ======================================================== 00:30:25.435 00:30:25.435 00:30:25.435 real 0m3.425s 00:30:25.435 user 0m9.402s 00:30:25.435 sys 0m0.173s 00:30:25.435 05:41:45 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:25.435 05:41:45 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:30:25.435 ************************************ 00:30:25.435 END TEST nvme_arbitration 00:30:25.435 ************************************ 00:30:25.435 05:41:45 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:25.435 05:41:45 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:30:25.435 05:41:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:25.435 05:41:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:25.435 ************************************ 00:30:25.435 START TEST nvme_single_aen 00:30:25.435 ************************************ 00:30:25.435 05:41:45 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:26.002 Asynchronous Event Request test 00:30:26.002 Attached to 0000:00:10.0 00:30:26.002 Attached to 0000:00:11.0 00:30:26.002 Attached to 0000:00:13.0 00:30:26.002 Attached to 0000:00:12.0 00:30:26.002 Reset controller to setup AER completions for this process 00:30:26.002 Registering asynchronous event callbacks... 
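The registration step just logged hooks a handler that fires whenever a controller posts an asynchronous event; the aer_cb lines that follow decode event type 0x01 (SMART/health) with info 0x01 (temperature over or under threshold) and re-read log page 2. A sketch of the registration and decode, assuming SPDK's AER API and spec structs:

    /* Hedged sketch: register an AER handler and pick out the
     * temperature-threshold event reported in this log. */
    #include "spdk/nvme.h"

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        union spdk_nvme_async_event_completion ev;
        (void)arg;
        ev.raw = cpl->cdw0;
        if (ev.bits.async_event_type == SPDK_NVME_ASYNC_EVENT_TYPE_SMART) {
            /* re-read log page 2 (SMART / Health Information), which also
             * clears the event, then reset the temperature threshold */
        }
    }

    static void hook_aer(struct spdk_nvme_ctrlr *ctrlr) {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }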
00:30:26.002 Getting orig temperature thresholds of all controllers 00:30:26.002 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:26.002 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:26.002 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:26.002 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:26.002 Setting all controllers temperature threshold low to trigger AER 00:30:26.002 Waiting for all controllers temperature threshold to be set lower 00:30:26.002 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:26.002 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:30:26.002 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:26.002 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:30:26.002 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:26.002 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:30:26.002 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:26.002 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:30:26.002 Waiting for all controllers to trigger AER and reset threshold 00:30:26.002 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:26.002 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:26.002 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:26.002 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:26.002 Cleaning up... 00:30:26.002 00:30:26.002 real 0m0.312s 00:30:26.002 user 0m0.118s 00:30:26.002 sys 0m0.146s 00:30:26.002 05:41:45 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:26.002 05:41:45 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:30:26.002 ************************************ 00:30:26.002 END TEST nvme_single_aen 00:30:26.002 ************************************ 00:30:26.002 05:41:45 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:26.002 05:41:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:26.002 05:41:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:26.002 05:41:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:30:26.002 ************************************ 00:30:26.002 START TEST nvme_doorbell_aers 00:30:26.002 ************************************ 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:26.002 05:41:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:30:26.261 [2024-11-20 05:41:46.099689] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:30:36.236 Executing: test_write_invalid_db 00:30:36.236 Waiting for AER completion... 00:30:36.236 Failure: test_write_invalid_db 00:30:36.236 00:30:36.236 Executing: test_invalid_db_write_overflow_sq 00:30:36.236 Waiting for AER completion... 00:30:36.236 Failure: test_invalid_db_write_overflow_sq 00:30:36.236 00:30:36.236 Executing: test_invalid_db_write_overflow_cq 00:30:36.236 Waiting for AER completion... 00:30:36.236 Failure: test_invalid_db_write_overflow_cq 00:30:36.236 00:30:36.236 05:41:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:36.236 05:41:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:30:36.236 [2024-11-20 05:41:56.137855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:30:46.204 Executing: test_write_invalid_db 00:30:46.204 Waiting for AER completion... 00:30:46.204 Failure: test_write_invalid_db 00:30:46.205 00:30:46.205 Executing: test_invalid_db_write_overflow_sq 00:30:46.205 Waiting for AER completion... 00:30:46.205 Failure: test_invalid_db_write_overflow_sq 00:30:46.205 00:30:46.205 Executing: test_invalid_db_write_overflow_cq 00:30:46.205 Waiting for AER completion... 00:30:46.205 Failure: test_invalid_db_write_overflow_cq 00:30:46.205 00:30:46.205 05:42:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:46.205 05:42:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:30:46.463 [2024-11-20 05:42:06.231855] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:30:56.445 Executing: test_write_invalid_db 00:30:56.445 Waiting for AER completion... 00:30:56.445 Failure: test_write_invalid_db 00:30:56.445 00:30:56.445 Executing: test_invalid_db_write_overflow_sq 00:30:56.445 Waiting for AER completion... 00:30:56.445 Failure: test_invalid_db_write_overflow_sq 00:30:56.445 00:30:56.445 Executing: test_invalid_db_write_overflow_cq 00:30:56.445 Waiting for AER completion... 
00:30:56.445 Failure: test_invalid_db_write_overflow_cq 00:30:56.445 00:30:56.445 05:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:56.445 05:42:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:30:56.445 [2024-11-20 05:42:16.265418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.424 Executing: test_write_invalid_db 00:31:06.424 Waiting for AER completion... 00:31:06.424 Failure: test_write_invalid_db 00:31:06.424 00:31:06.424 Executing: test_invalid_db_write_overflow_sq 00:31:06.424 Waiting for AER completion... 00:31:06.424 Failure: test_invalid_db_write_overflow_sq 00:31:06.424 00:31:06.424 Executing: test_invalid_db_write_overflow_cq 00:31:06.424 Waiting for AER completion... 00:31:06.424 Failure: test_invalid_db_write_overflow_cq 00:31:06.424 00:31:06.424 00:31:06.424 real 0m40.327s 00:31:06.424 user 0m33.188s 00:31:06.424 sys 0m6.724s 00:31:06.424 05:42:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:06.424 05:42:26 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:31:06.424 ************************************ 00:31:06.424 END TEST nvme_doorbell_aers 00:31:06.424 ************************************ 00:31:06.424 05:42:26 nvme -- nvme/nvme.sh@97 -- # uname 00:31:06.424 05:42:26 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:31:06.424 05:42:26 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:06.424 05:42:26 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:31:06.424 05:42:26 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:06.424 05:42:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:06.424 ************************************ 00:31:06.424 START TEST nvme_multi_aen 00:31:06.424 ************************************ 00:31:06.424 05:42:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:31:06.683 [2024-11-20 05:42:26.358016] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.358132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.358152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.359888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.359936] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.359952] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.361279] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. 
Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.361324] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.361339] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.362599] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.362639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 [2024-11-20 05:42:26.362655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65023) is not found. Dropping the request. 00:31:06.683 Child process pid: 65533 00:31:06.942 [Child] Asynchronous Event Request test 00:31:06.942 [Child] Attached to 0000:00:10.0 00:31:06.942 [Child] Attached to 0000:00:11.0 00:31:06.942 [Child] Attached to 0000:00:13.0 00:31:06.942 [Child] Attached to 0000:00:12.0 00:31:06.942 [Child] Registering asynchronous event callbacks... 00:31:06.942 [Child] Getting orig temperature thresholds of all controllers 00:31:06.942 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 [Child] Waiting for all controllers to trigger AER and reset threshold 00:31:06.942 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 [Child] Cleaning up... 00:31:06.942 Asynchronous Event Request test 00:31:06.942 Attached to 0000:00:10.0 00:31:06.942 Attached to 0000:00:11.0 00:31:06.942 Attached to 0000:00:13.0 00:31:06.942 Attached to 0000:00:12.0 00:31:06.942 Reset controller to setup AER completions for this process 00:31:06.942 Registering asynchronous event callbacks... 
00:31:06.942 Getting orig temperature thresholds of all controllers 00:31:06.942 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:31:06.942 Setting all controllers temperature threshold low to trigger AER 00:31:06.942 Waiting for all controllers temperature threshold to be set lower 00:31:06.942 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:31:06.942 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:31:06.942 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:31:06.942 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:31:06.942 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:31:06.942 Waiting for all controllers to trigger AER and reset threshold 00:31:06.942 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:31:06.942 Cleaning up... 00:31:06.942 00:31:06.942 real 0m0.634s 00:31:06.942 user 0m0.212s 00:31:06.942 sys 0m0.309s 00:31:06.942 05:42:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:06.942 05:42:26 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:31:06.942 ************************************ 00:31:06.942 END TEST nvme_multi_aen 00:31:06.942 ************************************ 00:31:06.942 05:42:26 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:06.942 05:42:26 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:06.942 05:42:26 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:06.942 05:42:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:06.942 ************************************ 00:31:06.942 START TEST nvme_startup 00:31:06.942 ************************************ 00:31:06.942 05:42:26 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:31:07.201 Initializing NVMe Controllers 00:31:07.201 Attached to 0000:00:10.0 00:31:07.201 Attached to 0000:00:11.0 00:31:07.201 Attached to 0000:00:13.0 00:31:07.201 Attached to 0000:00:12.0 00:31:07.201 Initialization complete. 00:31:07.201 Time used:191091.688 (us). 
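Note: the START/END banners and the real/user/sys triplets that bracket every test here come from the harness's run_test wrapper. The real helper lives in autotest_common.sh and is not reproduced in this log; the sketch below is a hypothetical reconstruction of just its observable timing behavior:

    run_test_sketch() {       # hypothetical stand-in, not the real run_test
        local name=$1; shift
        echo "START TEST $name"
        time "$@"             # bash's time keyword prints the real/user/sys lines
        echo "END TEST $name"
    }
    run_test_sketch nvme_startup \
        /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000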
00:31:07.201 00:31:07.201 real 0m0.292s 00:31:07.201 user 0m0.107s 00:31:07.201 sys 0m0.139s 00:31:07.201 05:42:27 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:07.201 05:42:27 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:31:07.201 ************************************ 00:31:07.201 END TEST nvme_startup 00:31:07.201 ************************************ 00:31:07.459 05:42:27 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:31:07.459 05:42:27 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:07.459 05:42:27 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:07.459 05:42:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:07.459 ************************************ 00:31:07.459 START TEST nvme_multi_secondary 00:31:07.459 ************************************ 00:31:07.459 05:42:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:31:07.459 05:42:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:31:07.459 05:42:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65589 00:31:07.459 05:42:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65590 00:31:07.459 05:42:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:07.459 05:42:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:31:10.742 Initializing NVMe Controllers 00:31:10.742 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:10.742 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:10.742 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:10.742 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:10.742 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:10.742 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:10.742 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:10.742 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:10.742 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:10.742 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:10.742 Initialization complete. Launching workers. 
00:31:10.742 ======================================================== 00:31:10.742 Latency(us) 00:31:10.742 Device Information : IOPS MiB/s Average min max 00:31:10.742 PCIE (0000:00:10.0) NSID 1 from core 2: 2950.56 11.53 5419.15 1269.31 20323.60 00:31:10.742 PCIE (0000:00:11.0) NSID 1 from core 2: 2950.56 11.53 5421.54 1290.63 15259.38 00:31:10.742 PCIE (0000:00:13.0) NSID 1 from core 2: 2950.56 11.53 5422.05 1266.79 15450.92 00:31:10.742 PCIE (0000:00:12.0) NSID 1 from core 2: 2950.56 11.53 5421.40 1274.41 15365.54 00:31:10.742 PCIE (0000:00:12.0) NSID 2 from core 2: 2950.56 11.53 5421.82 1268.77 16189.30 00:31:10.742 PCIE (0000:00:12.0) NSID 3 from core 2: 2950.56 11.53 5421.41 1253.04 16450.41 00:31:10.742 ======================================================== 00:31:10.742 Total : 17703.36 69.15 5421.23 1253.04 20323.60 00:31:10.742 00:31:11.002 Initializing NVMe Controllers 00:31:11.002 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:11.002 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:11.002 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:11.002 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:11.002 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:11.002 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:11.002 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:11.002 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:11.002 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:11.002 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:11.002 Initialization complete. Launching workers. 00:31:11.002 ======================================================== 00:31:11.002 Latency(us) 00:31:11.002 Device Information : IOPS MiB/s Average min max 00:31:11.002 PCIE (0000:00:10.0) NSID 1 from core 1: 5491.43 21.45 2911.09 1087.89 8305.49 00:31:11.002 PCIE (0000:00:11.0) NSID 1 from core 1: 5491.43 21.45 2912.96 1120.39 8478.98 00:31:11.002 PCIE (0000:00:13.0) NSID 1 from core 1: 5491.43 21.45 2912.97 1240.78 7752.56 00:31:11.002 PCIE (0000:00:12.0) NSID 1 from core 1: 5491.43 21.45 2913.04 1123.34 7952.41 00:31:11.002 PCIE (0000:00:12.0) NSID 2 from core 1: 5491.43 21.45 2913.13 1059.75 8220.11 00:31:11.002 PCIE (0000:00:12.0) NSID 3 from core 1: 5491.43 21.45 2913.34 1252.66 8376.99 00:31:11.002 ======================================================== 00:31:11.002 Total : 32948.57 128.71 2912.75 1059.75 8478.98 00:31:11.002 00:31:11.002 05:42:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65589 00:31:12.899 Initializing NVMe Controllers 00:31:12.899 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:12.899 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:12.899 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:12.899 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:12.899 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:12.899 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:12.899 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:12.899 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:12.899 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:12.899 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:12.899 Initialization complete. Launching workers. 
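Note: the nvme_multi_secondary body launches three spdk_nvme_perf instances against the same controllers at once: distinct core masks (-c 0x1/0x2/0x4) pin them to separate cores, and the shared shm id (-i 0) places them in one DPDK primary/secondary process group. A sketch of the first launch pattern, flags copied from the trace above:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, runs 5 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, runs 3 s
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &           # core 2, runs 3 s
    wait    # block until all three instances have printed their tables

That ordering matches the output: the two 3-second runs (cores 2 and 1) report their latency tables first, and the 5-second run on core 0 reports last.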
00:31:12.899 ======================================================== 00:31:12.899 Latency(us) 00:31:12.899 Device Information : IOPS MiB/s Average min max 00:31:12.899 PCIE (0000:00:10.0) NSID 1 from core 0: 9135.65 35.69 1749.65 839.96 8461.52 00:31:12.899 PCIE (0000:00:11.0) NSID 1 from core 0: 9135.65 35.69 1750.87 862.26 7128.67 00:31:12.899 PCIE (0000:00:13.0) NSID 1 from core 0: 9135.65 35.69 1750.82 843.97 7472.04 00:31:12.899 PCIE (0000:00:12.0) NSID 1 from core 0: 9135.65 35.69 1750.78 845.39 8080.77 00:31:12.899 PCIE (0000:00:12.0) NSID 2 from core 0: 9135.65 35.69 1750.74 817.37 8557.50 00:31:12.899 PCIE (0000:00:12.0) NSID 3 from core 0: 9135.65 35.69 1750.70 786.60 8324.06 00:31:12.899 ======================================================== 00:31:12.899 Total : 54813.91 214.12 1750.59 786.60 8557.50 00:31:12.899 00:31:12.899 05:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65590 00:31:12.899 05:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65659 00:31:12.899 05:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:12.899 05:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65660 00:31:12.899 05:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:12.899 05:42:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:16.186 Initializing NVMe Controllers 00:31:16.186 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:16.186 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:16.186 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:16.186 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:16.186 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:31:16.186 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:31:16.186 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:31:16.186 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:31:16.186 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:31:16.186 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:31:16.186 Initialization complete. Launching workers. 
00:31:16.186 ======================================================== 00:31:16.186 Latency(us) 00:31:16.186 Device Information : IOPS MiB/s Average min max 00:31:16.186 PCIE (0000:00:10.0) NSID 1 from core 0: 6042.59 23.60 2645.60 833.20 6317.13 00:31:16.186 PCIE (0000:00:11.0) NSID 1 from core 0: 6042.59 23.60 2647.50 875.19 6883.75 00:31:16.186 PCIE (0000:00:13.0) NSID 1 from core 0: 6042.59 23.60 2647.59 872.94 6106.47 00:31:16.186 PCIE (0000:00:12.0) NSID 1 from core 0: 6042.59 23.60 2647.69 872.33 6934.36 00:31:16.186 PCIE (0000:00:12.0) NSID 2 from core 0: 6042.59 23.60 2647.82 874.62 6584.17 00:31:16.186 PCIE (0000:00:12.0) NSID 3 from core 0: 6047.93 23.62 2645.71 869.79 6857.34 00:31:16.186 ======================================================== 00:31:16.186 Total : 36260.89 141.64 2646.98 833.20 6934.36 00:31:16.186 00:31:16.186 Initializing NVMe Controllers 00:31:16.186 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:16.186 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:16.186 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:16.186 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:16.186 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:31:16.186 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:31:16.186 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:31:16.186 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:31:16.186 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:31:16.186 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:31:16.186 Initialization complete. Launching workers. 00:31:16.186 ======================================================== 00:31:16.186 Latency(us) 00:31:16.186 Device Information : IOPS MiB/s Average min max 00:31:16.186 PCIE (0000:00:10.0) NSID 1 from core 1: 5732.22 22.39 2788.86 938.15 6599.90 00:31:16.186 PCIE (0000:00:11.0) NSID 1 from core 1: 5732.22 22.39 2790.51 949.05 6779.67 00:31:16.186 PCIE (0000:00:13.0) NSID 1 from core 1: 5732.22 22.39 2790.42 942.48 6507.73 00:31:16.186 PCIE (0000:00:12.0) NSID 1 from core 1: 5732.22 22.39 2790.32 944.85 6694.02 00:31:16.186 PCIE (0000:00:12.0) NSID 2 from core 1: 5732.22 22.39 2790.22 967.17 6634.30 00:31:16.186 PCIE (0000:00:12.0) NSID 3 from core 1: 5732.22 22.39 2790.11 909.14 6595.67 00:31:16.186 ======================================================== 00:31:16.186 Total : 34393.32 134.35 2790.07 909.14 6779.67 00:31:16.186 00:31:18.109 Initializing NVMe Controllers 00:31:18.109 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:31:18.109 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:31:18.109 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:31:18.109 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:31:18.109 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:31:18.109 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:31:18.109 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:31:18.109 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:31:18.109 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:31:18.109 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:31:18.109 Initialization complete. Launching workers. 
00:31:18.109 ======================================================== 00:31:18.109 Latency(us) 00:31:18.109 Device Information : IOPS MiB/s Average min max 00:31:18.109 PCIE (0000:00:10.0) NSID 1 from core 2: 3215.70 12.56 4972.24 906.01 14586.57 00:31:18.109 PCIE (0000:00:11.0) NSID 1 from core 2: 3215.70 12.56 4974.89 927.57 14432.25 00:31:18.109 PCIE (0000:00:13.0) NSID 1 from core 2: 3215.70 12.56 4975.00 942.33 18936.08 00:31:18.109 PCIE (0000:00:12.0) NSID 1 from core 2: 3215.70 12.56 4974.88 947.97 15651.29 00:31:18.109 PCIE (0000:00:12.0) NSID 2 from core 2: 3215.70 12.56 4974.77 945.17 16010.41 00:31:18.109 PCIE (0000:00:12.0) NSID 3 from core 2: 3215.70 12.56 4971.14 942.14 19063.28 00:31:18.109 ======================================================== 00:31:18.109 Total : 19294.22 75.37 4973.82 906.01 19063.28 00:31:18.109 00:31:18.368 05:42:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65659 00:31:18.368 05:42:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65660 00:31:18.368 00:31:18.368 real 0m10.969s 00:31:18.368 user 0m18.551s 00:31:18.368 sys 0m1.152s 00:31:18.368 05:42:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:18.368 05:42:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:31:18.368 ************************************ 00:31:18.368 END TEST nvme_multi_secondary 00:31:18.368 ************************************ 00:31:18.368 05:42:38 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:18.368 05:42:38 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:31:18.368 05:42:38 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64598 ]] 00:31:18.368 05:42:38 nvme -- common/autotest_common.sh@1092 -- # kill 64598 00:31:18.368 05:42:38 nvme -- common/autotest_common.sh@1093 -- # wait 64598 00:31:18.368 [2024-11-20 05:42:38.173918] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.174908] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.174980] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.174998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.178080] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.178141] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.178154] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.178170] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.181375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 
00:31:18.368 [2024-11-20 05:42:38.181433] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.181448] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.181465] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.185171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.185251] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.185269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.368 [2024-11-20 05:42:38.185286] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65532) is not found. Dropping the request. 00:31:18.627 [2024-11-20 05:42:38.364425] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:31:18.627 05:42:38 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:31:18.627 05:42:38 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:31:18.627 05:42:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:18.627 05:42:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:18.627 05:42:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:18.628 05:42:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:18.628 ************************************ 00:31:18.628 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:18.628 ************************************ 00:31:18.628 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:18.628 * Looking for test storage... 
00:31:18.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.887 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:18.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.887 --rc genhtml_branch_coverage=1 00:31:18.887 --rc genhtml_function_coverage=1 00:31:18.887 --rc genhtml_legend=1 00:31:18.888 --rc geninfo_all_blocks=1 00:31:18.888 --rc geninfo_unexecuted_blocks=1 00:31:18.888 00:31:18.888 ' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:18.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.888 --rc genhtml_branch_coverage=1 00:31:18.888 --rc genhtml_function_coverage=1 00:31:18.888 --rc genhtml_legend=1 00:31:18.888 --rc geninfo_all_blocks=1 00:31:18.888 --rc geninfo_unexecuted_blocks=1 00:31:18.888 00:31:18.888 ' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:18.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.888 --rc genhtml_branch_coverage=1 00:31:18.888 --rc genhtml_function_coverage=1 00:31:18.888 --rc genhtml_legend=1 00:31:18.888 --rc geninfo_all_blocks=1 00:31:18.888 --rc geninfo_unexecuted_blocks=1 00:31:18.888 00:31:18.888 ' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:18.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.888 --rc genhtml_branch_coverage=1 00:31:18.888 --rc genhtml_function_coverage=1 00:31:18.888 --rc genhtml_legend=1 00:31:18.888 --rc geninfo_all_blocks=1 00:31:18.888 --rc geninfo_unexecuted_blocks=1 00:31:18.888 00:31:18.888 ' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:18.888 
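Note: the scripts/common.sh tracing above is the coverage setup deciding whether the installed lcov predates 2.x: cmp_versions splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A compact sketch of that comparison (ver_lt is a hypothetical name; the harness's own helpers are lt and cmp_versions):

    ver_lt() {    # returns 0 when $1 sorts strictly before $2
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"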
05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65828 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65828 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65828 ']' 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:18.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
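Note: waitforlisten, traced above, blocks until the freshly launched spdk_tgt answers on its default RPC socket (/var/tmp/spdk.sock). A hedged sketch of that handshake which polls a standard RPC (rpc_get_methods) rather than the harness's internal helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0xF & spdk_target_pid=$!
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        # fail fast if the target died during bring-up instead of spinning
        kill -0 "$spdk_target_pid" 2>/dev/null || { echo 'spdk_tgt exited' >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt (pid $spdk_target_pid) is listening on /var/tmp/spdk.sock"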
00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:18.888 05:42:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:19.167 [2024-11-20 05:42:38.894263] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:31:19.167 [2024-11-20 05:42:38.894407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65828 ] 00:31:19.439 [2024-11-20 05:42:39.100367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.439 [2024-11-20 05:42:39.252270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.439 [2024-11-20 05:42:39.252478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.439 [2024-11-20 05:42:39.252595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.439 [2024-11-20 05:42:39.252618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:20.811 nvme0n1 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:20.811 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XYgpr.txt 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:20.812 true 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732081360 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65856 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:20.812 05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:20.812 
05:42:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:22.717 [2024-11-20 05:42:42.541920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:31:22.717 [2024-11-20 05:42:42.542345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:22.717 [2024-11-20 05:42:42.542393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:22.717 [2024-11-20 05:42:42.542413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.717 [2024-11-20 05:42:42.544338] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:31:22.717 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65856 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65856 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65856 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:22.717 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XYgpr.txt 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XYgpr.txt 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65828 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65828 ']' 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65828 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65828 00:31:22.977 killing process with pid 65828 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65828' 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65828 00:31:22.977 05:42:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65828 00:31:26.287 05:42:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:26.287 05:42:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:26.287 00:31:26.287 real 0m7.527s 00:31:26.287 user 0m26.167s 00:31:26.287 sys 0m1.044s 00:31:26.287 05:42:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 
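Note: the decode at the end of the test unpacks the completion that bdev_nvme_send_cmd captured into /tmp/err_inj_XYgpr.txt: the base64 blob is a 16-byte completion queue entry whose last two bytes hold the status word (phase in bit 0, Status Code in bits 1-8, Status Code Type in bits 9-11). A sketch of that decode, assuming base64_decode_bits performs exactly this shift-and-mask:

    cpl=AAAAAAAAAAAAAAAAAAACAA==    # value recorded in the temp file above
    mapfile -t bytes < <(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"')
    status=$(( bytes[14] | bytes[15] << 8 ))    # little-endian status word of DW3
    sc=$((  (status >> 1) & 0xff ))   # Status Code      -> 0x1, as injected (--sc 1)
    sct=$(( (status >> 9) & 0x7 ))    # Status Code Type -> 0x0, as injected (--sct 0)
    printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"

Both values matching the injected ones is what lets the (( err_injection_sc != nvme_status_sc ... )) check after the run pass silently.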
00:31:26.287 05:42:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:31:26.287 ************************************ 00:31:26.287 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:26.287 ************************************ 00:31:26.287 05:42:45 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:26.287 05:42:45 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:26.287 05:42:45 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:26.287 05:42:45 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:26.287 05:42:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:31:26.287 ************************************ 00:31:26.287 START TEST nvme_fio 00:31:26.287 ************************************ 00:31:26.287 05:42:45 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:31:26.287 05:42:45 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:26.287 05:42:45 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:26.287 05:42:45 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:26.287 05:42:45 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:26.287 05:42:45 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:31:26.287 05:42:45 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:26.287 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:26.287 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:26.287 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:31:26.287 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:31:26.287 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:31:26.287 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:26.287 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:26.287 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:26.287 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:26.546 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:26.546 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:31:26.806 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:26.806 05:42:46 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:26.806 05:42:46 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:26.806 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:31:27.065 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:27.065 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:27.065 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:31:27.065 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:27.065 05:42:46 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:31:27.065 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:27.065 fio-3.35 00:31:27.065 Starting 1 thread 00:31:32.387 00:31:32.387 test: (groupid=0, jobs=1): err= 0: pid=66020: Wed Nov 20 05:42:52 2024 00:31:32.387 read: IOPS=20.8k, BW=81.1MiB/s (85.1MB/s)(162MiB/2001msec) 00:31:32.387 slat (usec): min=5, max=110, avg= 6.22, stdev= 1.56 00:31:32.387 clat (usec): min=233, max=12034, avg=3068.17, stdev=522.31 00:31:32.387 lat (usec): min=239, max=12144, avg=3074.38, stdev=523.17 00:31:32.387 clat percentiles (usec): 00:31:32.387 | 1.00th=[ 2114], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:31:32.387 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:31:32.387 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3556], 00:31:32.387 | 99.00th=[ 5604], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 9110], 00:31:32.387 | 99.99th=[11600] 00:31:32.387 bw ( KiB/s): min=82224, max=85792, per=100.00%, avg=83752.00, stdev=1838.28, samples=3 00:31:32.387 iops : min=20556, max=21448, avg=20938.00, stdev=459.57, samples=3 00:31:32.387 write: IOPS=20.7k, BW=80.8MiB/s (84.7MB/s)(162MiB/2001msec); 0 zone resets 00:31:32.387 slat (nsec): min=5233, max=66148, avg=6506.93, stdev=1652.87 00:31:32.387 clat (usec): min=269, max=11741, avg=3074.01, stdev=525.09 00:31:32.387 lat (usec): min=275, max=11766, avg=3080.51, stdev=526.02 00:31:32.387 clat percentiles (usec): 00:31:32.387 | 1.00th=[ 2147], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:31:32.387 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:31:32.387 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3556], 00:31:32.387 | 99.00th=[ 5800], 99.50th=[ 7177], 99.90th=[ 8094], 99.95th=[ 9372], 00:31:32.387 | 99.99th=[11207] 00:31:32.387 bw ( KiB/s): min=82408, max=85896, per=100.00%, avg=83853.33, stdev=1819.10, samples=3 00:31:32.387 iops : min=20602, max=21474, avg=20963.33, stdev=454.78, samples=3 00:31:32.387 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:32.387 lat (msec) : 2=0.71%, 4=96.84%, 10=2.37%, 20=0.04% 00:31:32.387 cpu : usr=99.10%, sys=0.10%, ctx=4, 
majf=0, minf=607 00:31:32.387 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:32.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.387 issued rwts: total=41552,41376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.387 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.387 00:31:32.387 Run status group 0 (all jobs): 00:31:32.387 READ: bw=81.1MiB/s (85.1MB/s), 81.1MiB/s-81.1MiB/s (85.1MB/s-85.1MB/s), io=162MiB (170MB), run=2001-2001msec 00:31:32.387 WRITE: bw=80.8MiB/s (84.7MB/s), 80.8MiB/s-80.8MiB/s (84.7MB/s-84.7MB/s), io=162MiB (169MB), run=2001-2001msec 00:31:32.645 ----------------------------------------------------- 00:31:32.645 Suppressions used: 00:31:32.645 count bytes template 00:31:32.645 1 32 /usr/src/fio/parse.c 00:31:32.645 1 8 libtcmalloc_minimal.so 00:31:32.645 ----------------------------------------------------- 00:31:32.645 00:31:32.645 05:42:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:32.645 05:42:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:32.645 05:42:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:32.645 05:42:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:32.904 05:42:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:31:32.904 05:42:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:33.469 05:42:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:33.469 05:42:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:31:33.469 05:42:53 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:33.469 05:42:53 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:31:33.469 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:33.469 fio-3.35 00:31:33.469 Starting 1 thread 00:31:45.671 00:31:45.671 test: (groupid=0, jobs=1): err= 0: pid=66111: Wed Nov 20 05:43:04 2024 00:31:45.671 read: IOPS=20.2k, BW=78.8MiB/s (82.7MB/s)(158MiB/2001msec) 00:31:45.671 slat (nsec): min=5027, max=66104, avg=6430.54, stdev=2118.50 00:31:45.671 clat (usec): min=238, max=11532, avg=3158.16, stdev=790.81 00:31:45.671 lat (usec): min=244, max=11593, avg=3164.59, stdev=792.15 00:31:45.671 clat percentiles (usec): 00:31:45.671 | 1.00th=[ 2040], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 2933], 00:31:45.671 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 2999], 60.00th=[ 3032], 00:31:45.671 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 4293], 00:31:45.671 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[ 8979], 99.95th=[ 9503], 00:31:45.671 | 99.99th=[11338] 00:31:45.671 bw ( KiB/s): min=74296, max=84696, per=99.27%, avg=80149.33, stdev=5321.70, samples=3 00:31:45.671 iops : min=18574, max=21174, avg=20037.33, stdev=1330.43, samples=3 00:31:45.671 write: IOPS=20.1k, BW=78.7MiB/s (82.5MB/s)(157MiB/2001msec); 0 zone resets 00:31:45.671 slat (nsec): min=5228, max=61263, avg=6703.05, stdev=2052.48 00:31:45.671 clat (usec): min=257, max=11436, avg=3157.27, stdev=792.41 00:31:45.671 lat (usec): min=264, max=11455, avg=3163.97, stdev=793.69 00:31:45.671 clat percentiles (usec): 00:31:45.671 | 1.00th=[ 2008], 5.00th=[ 2704], 10.00th=[ 2868], 20.00th=[ 2933], 00:31:45.671 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 2999], 60.00th=[ 3032], 00:31:45.671 | 70.00th=[ 3064], 80.00th=[ 3097], 90.00th=[ 3261], 95.00th=[ 4293], 00:31:45.671 | 99.00th=[ 7701], 99.50th=[ 8291], 99.90th=[ 8979], 99.95th=[ 9634], 00:31:45.671 | 99.99th=[11076] 00:31:45.671 bw ( KiB/s): min=74376, max=85000, per=99.51%, avg=80170.67, stdev=5377.38, samples=3 00:31:45.671 iops : min=18594, max=21250, avg=20042.67, stdev=1344.35, samples=3 00:31:45.671 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:45.671 lat (msec) : 2=0.90%, 4=93.00%, 10=6.03%, 20=0.03% 00:31:45.671 cpu : usr=99.15%, sys=0.00%, ctx=2, majf=0, minf=608 00:31:45.671 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:45.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.671 issued rwts: total=40389,40304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.671 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.671 00:31:45.671 Run status group 0 (all jobs): 00:31:45.671 READ: bw=78.8MiB/s (82.7MB/s), 78.8MiB/s-78.8MiB/s (82.7MB/s-82.7MB/s), io=158MiB (165MB), run=2001-2001msec 00:31:45.671 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=157MiB (165MB), run=2001-2001msec 00:31:45.671 ----------------------------------------------------- 00:31:45.671 Suppressions used: 00:31:45.671 count bytes template 00:31:45.671 1 32 /usr/src/fio/parse.c 00:31:45.671 1 8 libtcmalloc_minimal.so 00:31:45.671 ----------------------------------------------------- 00:31:45.671 
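Note: every fio run in this test goes through SPDK's external ioengine: the plugin is injected with LD_PRELOAD (ASan listed first so it stays the first DSO in the process), and the PCIe address in --filename uses dots rather than colons because fio reserves ':' inside filenames. A standalone sketch of the invocation, with paths taken from this run:

    SPDK=/home/vagrant/spdk_repo/spdk
    LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_nvme" \
        /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The --bs value is chosen just before each run: the harness greps spdk_nvme_identify output for an 'Extended Data LBA' namespace and falls back to 4096 when none is present.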
00:31:45.671 05:43:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:45.671 05:43:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:45.671 05:43:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:45.671 05:43:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:45.671 05:43:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:31:45.671 05:43:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:45.671 05:43:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:45.671 05:43:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:45.671 05:43:05 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:31:45.671 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.671 fio-3.35 00:31:45.671 Starting 1 thread 00:31:52.253 00:31:52.253 test: (groupid=0, jobs=1): err= 0: pid=66255: Wed Nov 20 05:43:11 2024 00:31:52.253 read: IOPS=20.0k, BW=78.1MiB/s (81.9MB/s)(156MiB/2001msec) 00:31:52.253 slat (nsec): min=4987, max=66836, avg=6355.88, stdev=1747.36 00:31:52.253 clat (usec): min=263, max=9220, avg=3186.22, stdev=557.43 00:31:52.253 lat (usec): min=269, max=9225, avg=3192.58, stdev=558.27 00:31:52.253 clat percentiles (usec): 00:31:52.253 | 1.00th=[ 2343], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 2999], 00:31:52.253 | 30.00th=[ 
3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:31:52.253 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 4228], 00:31:52.253 | 99.00th=[ 5735], 99.50th=[ 6783], 99.90th=[ 8356], 99.95th=[ 8717], 00:31:52.253 | 99.99th=[ 9110] 00:31:52.253 bw ( KiB/s): min=76768, max=81816, per=99.25%, avg=79394.67, stdev=2530.26, samples=3 00:31:52.253 iops : min=19192, max=20454, avg=19848.67, stdev=632.56, samples=3 00:31:52.253 write: IOPS=20.0k, BW=78.0MiB/s (81.7MB/s)(156MiB/2001msec); 0 zone resets 00:31:52.253 slat (nsec): min=5283, max=89938, avg=6716.07, stdev=1880.62 00:31:52.253 clat (usec): min=218, max=9393, avg=3196.54, stdev=580.98 00:31:52.253 lat (usec): min=224, max=9399, avg=3203.26, stdev=581.86 00:31:52.253 clat percentiles (usec): 00:31:52.253 | 1.00th=[ 2409], 5.00th=[ 2933], 10.00th=[ 2966], 20.00th=[ 2999], 00:31:52.253 | 30.00th=[ 3032], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:31:52.253 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3359], 95.00th=[ 4228], 00:31:52.253 | 99.00th=[ 5866], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 8979], 00:31:52.253 | 99.99th=[ 9110] 00:31:52.253 bw ( KiB/s): min=76656, max=81776, per=99.50%, avg=79426.67, stdev=2585.87, samples=3 00:31:52.253 iops : min=19164, max=20444, avg=19856.67, stdev=646.47, samples=3 00:31:52.253 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:31:52.253 lat (msec) : 2=0.51%, 4=91.66%, 10=7.79% 00:31:52.253 cpu : usr=98.70%, sys=0.25%, ctx=4, majf=0, minf=607 00:31:52.253 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:52.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:52.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:52.253 issued rwts: total=40015,39934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:52.253 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:52.253 00:31:52.253 Run status group 0 (all jobs): 00:31:52.253 READ: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=156MiB (164MB), run=2001-2001msec 00:31:52.253 WRITE: bw=78.0MiB/s (81.7MB/s), 78.0MiB/s-78.0MiB/s (81.7MB/s-81.7MB/s), io=156MiB (164MB), run=2001-2001msec 00:31:52.253 ----------------------------------------------------- 00:31:52.253 Suppressions used: 00:31:52.253 count bytes template 00:31:52.253 1 32 /usr/src/fio/parse.c 00:31:52.253 1 8 libtcmalloc_minimal.so 00:31:52.253 ----------------------------------------------------- 00:31:52.253 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:31:52.253 05:43:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:52.253 05:43:11 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:31:52.512 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:52.512 fio-3.35 00:31:52.512 Starting 1 thread 00:32:10.603 00:32:10.603 test: (groupid=0, jobs=1): err= 0: pid=66347: Wed Nov 20 05:43:27 2024 00:32:10.603 read: IOPS=21.3k, BW=83.1MiB/s (87.1MB/s)(166MiB/2001msec) 00:32:10.603 slat (nsec): min=5037, max=55449, avg=6137.01, stdev=1276.42 00:32:10.603 clat (usec): min=317, max=8138, avg=3005.18, stdev=364.17 00:32:10.603 lat (usec): min=324, max=8179, avg=3011.31, stdev=364.65 00:32:10.603 clat percentiles (usec): 00:32:10.603 | 1.00th=[ 2212], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2900], 00:32:10.603 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:32:10.603 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3458], 00:32:10.603 | 99.00th=[ 4146], 99.50th=[ 5145], 99.90th=[ 7635], 99.95th=[ 7832], 00:32:10.603 | 99.99th=[ 8029] 00:32:10.603 bw ( KiB/s): min=80064, max=87000, per=98.95%, avg=84189.33, stdev=3650.11, samples=3 00:32:10.603 iops : min=20016, max=21750, avg=21047.33, stdev=912.53, samples=3 00:32:10.603 write: IOPS=21.1k, BW=82.5MiB/s (86.6MB/s)(165MiB/2001msec); 0 zone resets 00:32:10.603 slat (nsec): min=5162, max=51899, avg=6364.12, stdev=1238.41 00:32:10.603 clat (usec): min=224, max=8260, avg=3009.77, stdev=375.51 00:32:10.603 lat (usec): min=230, max=8265, avg=3016.13, stdev=375.92 00:32:10.603 clat percentiles (usec): 00:32:10.603 | 1.00th=[ 2212], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:32:10.603 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:32:10.603 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3458], 00:32:10.603 
| 99.00th=[ 4228], 99.50th=[ 5342], 99.90th=[ 7701], 99.95th=[ 7898], 00:32:10.603 | 99.99th=[ 8029] 00:32:10.603 bw ( KiB/s): min=79968, max=87008, per=99.68%, avg=84248.00, stdev=3758.08, samples=3 00:32:10.603 iops : min=19992, max=21752, avg=21062.00, stdev=939.52, samples=3 00:32:10.603 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:10.603 lat (msec) : 2=0.69%, 4=97.81%, 10=1.46% 00:32:10.603 cpu : usr=99.05%, sys=0.20%, ctx=3, majf=0, minf=605 00:32:10.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:10.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:10.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:10.603 issued rwts: total=42561,42282,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:10.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:10.603 00:32:10.603 Run status group 0 (all jobs): 00:32:10.603 READ: bw=83.1MiB/s (87.1MB/s), 83.1MiB/s-83.1MiB/s (87.1MB/s-87.1MB/s), io=166MiB (174MB), run=2001-2001msec 00:32:10.603 WRITE: bw=82.5MiB/s (86.6MB/s), 82.5MiB/s-82.5MiB/s (86.6MB/s-86.6MB/s), io=165MiB (173MB), run=2001-2001msec 00:32:10.603 ----------------------------------------------------- 00:32:10.603 Suppressions used: 00:32:10.603 count bytes template 00:32:10.603 1 32 /usr/src/fio/parse.c 00:32:10.603 1 8 libtcmalloc_minimal.so 00:32:10.603 ----------------------------------------------------- 00:32:10.603 00:32:10.603 ************************************ 00:32:10.603 END TEST nvme_fio 00:32:10.603 ************************************ 00:32:10.603 05:43:27 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:32:10.603 05:43:27 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:32:10.603 00:32:10.603 real 0m41.852s 00:32:10.603 user 0m17.827s 00:32:10.603 sys 0m46.634s 00:32:10.603 05:43:27 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.603 05:43:27 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:32:10.603 00:32:10.603 real 1m58.000s 00:32:10.603 user 3m53.111s 00:32:10.603 sys 1m2.067s 00:32:10.603 05:43:27 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.603 05:43:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:32:10.603 ************************************ 00:32:10.603 END TEST nvme 00:32:10.603 ************************************ 00:32:10.603 05:43:27 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:32:10.603 05:43:27 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:10.603 05:43:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:10.603 05:43:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:10.603 05:43:27 -- common/autotest_common.sh@10 -- # set +x 00:32:10.603 ************************************ 00:32:10.603 START TEST nvme_scc 00:32:10.603 ************************************ 00:32:10.603 05:43:27 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:32:10.603 * Looking for test storage... 
00:32:10.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:10.603 05:43:28 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:10.603 05:43:28 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:32:10.603 05:43:28 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:10.603 05:43:28 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.603 05:43:28 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@345 -- # : 1 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@368 -- # return 0 00:32:10.604 05:43:28 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.604 05:43:28 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.604 --rc genhtml_branch_coverage=1 00:32:10.604 --rc genhtml_function_coverage=1 00:32:10.604 --rc genhtml_legend=1 00:32:10.604 --rc geninfo_all_blocks=1 00:32:10.604 --rc geninfo_unexecuted_blocks=1 00:32:10.604 00:32:10.604 ' 00:32:10.604 05:43:28 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.604 --rc genhtml_branch_coverage=1 00:32:10.604 --rc genhtml_function_coverage=1 00:32:10.604 --rc genhtml_legend=1 00:32:10.604 --rc geninfo_all_blocks=1 00:32:10.604 --rc geninfo_unexecuted_blocks=1 00:32:10.604 00:32:10.604 ' 00:32:10.604 05:43:28 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:32:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.604 --rc genhtml_branch_coverage=1 00:32:10.604 --rc genhtml_function_coverage=1 00:32:10.604 --rc genhtml_legend=1 00:32:10.604 --rc geninfo_all_blocks=1 00:32:10.604 --rc geninfo_unexecuted_blocks=1 00:32:10.604 00:32:10.604 ' 00:32:10.604 05:43:28 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.604 --rc genhtml_branch_coverage=1 00:32:10.604 --rc genhtml_function_coverage=1 00:32:10.604 --rc genhtml_legend=1 00:32:10.604 --rc geninfo_all_blocks=1 00:32:10.604 --rc geninfo_unexecuted_blocks=1 00:32:10.604 00:32:10.604 ' 00:32:10.604 05:43:28 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.604 05:43:28 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.604 05:43:28 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.604 05:43:28 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.604 05:43:28 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.604 05:43:28 nvme_scc -- paths/export.sh@5 -- # export PATH 00:32:10.604 05:43:28 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
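The lcov guard traced a little earlier (lt 1.15 2 via cmp_versions) splits each version string on '.', '-' and ':' and walks the components numerically, padding the shorter array as it goes. A simplified sketch of that comparison; the real helper in scripts/common.sh also supports '>', '=' and non-numeric components:

    # Return 0 if version $1 sorts strictly before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal, so not less-than
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"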
00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:10.604 05:43:28 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:32:10.604 05:43:28 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:10.604 05:43:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:32:10.604 05:43:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:32:10.604 05:43:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:32:10.604 05:43:28 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:10.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:10.604 Waiting for block devices as requested 00:32:10.604 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:10.604 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:10.604 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:10.604 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:14.797 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:14.797 05:43:34 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:14.797 05:43:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:14.797 05:43:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:14.797 05:43:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:14.797 05:43:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
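Everything from here to the end of the scan is nvme_get repeating one motion per field: read a 'reg : val' line of nvme id-ctrl output, and eval it into a global associative array named after the controller. The loop that produced the nvme0[vid]=0x1b36 assignment above, and every assignment that follows, is essentially this sketch (values keep their padding, which is why sn/mn stay space-filled):

    # Condensed sketch of the nvme_get loop being traced.
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # e.g. a global nvme0=() array
        while IFS=: read -r reg val; do
            [[ -n "$val" ]] || continue      # skip banner/blank lines
            reg=${reg//[[:space:]]/}         # 'vid ' -> 'vid'
            eval "${ref}[\$reg]=\$val"       # nvme0[vid]='0x1b36', ...
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }

    nvme_get_sketch nvme0 /dev/nvme0
    echo "${nvme0[sn]} / ${nvme0[mn]}"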
00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.797 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
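Fields like oacs, frmw and lpa here (and oncs a little further down) are hex bitmasks, which is why they are cached verbatim: a test later checks a single capability bit with an arithmetic AND. For example, per the NVMe base spec, bit 8 of ONCS advertises the Copy command that the SCC test ultimately exercises, and the 0x15d reported by this QEMU controller has it set. A hedged sketch of such a check (the actual test in functions.sh may be wrapped in a helper):

    # Test one capability bit from the cached identify data.
    # 0x100 is ONCS bit 8 (Copy support) per the NVMe base spec.
    if (( ${nvme0[oncs]:-0} & 0x100 )); then
        echo "controller supports the Copy command"
    fi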
00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:14.798 05:43:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.798 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.799 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:14.800 05:43:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@21 -- 
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:32:14.800 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:32:14.801 05:43:34 nvme_scc -- scripts/common.sh@18 -- # local i
00:32:14.801 05:43:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:32:14.801 05:43:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:32:14.801 05:43:34 nvme_scc -- scripts/common.sh@27 -- # return 0
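Before touching nvme1, the enumeration loop asks scripts/common.sh's pci_can_use whether BDF 0000:00:10.0 is eligible; with no allow or block filters exported, the gate falls through to return 0, exactly as the trace shows. Roughly, the gate behaves like the sketch below (the PCI_BLOCKED and PCI_ALLOWED names and the matching details are assumptions read off the trace, not a verbatim copy of common.sh):

    # Sketch of the pci_can_use gate traced at scripts/common.sh@18-27.
    # PCI_BLOCKED / PCI_ALLOWED are assumed space-separated BDF lists.
    pci_can_use() {
        local i
        # A blocked BDF is rejected outright.
        if [[ " $PCI_BLOCKED " =~ " $1 " ]]; then
            return 1
        fi
        # No allow list set: every device may be used (the case in this run).
        if [[ -z $PCI_ALLOWED ]]; then
            return 0
        fi
        # Otherwise the BDF must appear in the allow list.
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }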
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:32:14.801 05:43:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
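Of the scalar fields above, ver=0x10400 packs the controller's NVMe specification version as three bytes, major.minor.tertiary, so this QEMU controller reports NVMe 1.4.0. Unpacking it with shell arithmetic (illustration only, not part of the test):

    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # prints: NVMe 1.4.0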
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:32:14.802 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:32:14.803 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
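For an nvme_scc run the field that matters most above is oncs=0x15d, the Optional NVM Command Support bitmask: bit 8 set means the controller implements the Copy (simple copy) command this test exercises. Decoding the mask by hand (bit names per the NVMe base specification; the snippet is only an illustration, not part of the test scripts):

    # Decode the ONCS bitmask reported above.
    oncs=0x15d
    names=("Compare" "Write Uncorrectable" "Dataset Management" "Write Zeroes"
           "Save/Select in Features" "Reservations" "Timestamp" "Verify" "Copy")
    for bit in "${!names[@]}"; do
        (( oncs & (1 << bit) )) && printf 'ONCS bit %d: %s\n' "$bit" "${names[bit]}"
    done
    # -> Compare, Dataset Management, Write Zeroes, Save/Select, Timestamp, Copy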
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:32:14.804 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
IFS=: 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:32:14.805 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:14.806 
05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:32:14.806 05:43:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:14.806 05:43:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:14.806 05:43:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:14.806 05:43:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:14.806 05:43:34 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.806 05:43:34 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:14.806 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:14.807 05:43:34 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:14.807 05:43:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
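[Editor's note] The trace above is SPDK's nvme_get helper at work: it runs nvme-cli's `id-ctrl` against the device, then reads each `field : value` line with `IFS=:` and evals it into a caller-named global associative array (here `nvme2`). Below is a minimal, self-contained sketch of that pattern — not SPDK's functions.sh verbatim; it avoids eval by using a fixed array name, and `/dev/nvme0` is an assumed device path:

```bash
#!/usr/bin/env bash
# Sketch of the id-ctrl parsing loop this trace exercises.
# Assumption: nvme-cli is installed and /dev/nvme0 exists.
declare -A ctrl=()

while IFS=: read -r reg val; do
    # Keep only "name : value" pairs; skips the header line, whose
    # value side is empty (the same [[ -n ... ]] guard seen in the log)
    [[ -n $val ]] || continue
    reg=${reg//[[:space:]]/}            # strip padding around the key
    val=${val#"${val%%[![:space:]]*}"}  # strip leading spaces from the value
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)

echo "model: ${ctrl[mn]}, mdts: ${ctrl[mdts]}"
```

The real script passes the array name as an argument (`nvme_get nvme2 id-ctrl /dev/nvme2`) and therefore needs `local -gA` plus `eval`, which is exactly why every key/value pair produces the eval/assign/IFS/read four-step visible in the trace.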
00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.807 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:14.808 05:43:34 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
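[Editor's note] A few of the fields just captured are log2-encoded per the NVMe base spec, so the raw values (`sqes=0x66`, `cqes=0x44`, `mdts=7` from earlier in this dump) are more readable once decoded. A small arithmetic sketch, assuming the usual 4 KiB CAP.MPSMIN for this QEMU controller:

```bash
# Decode log2-encoded Identify Controller fields from the dump above.
# SQES/CQES: low nibble = required entry size, high nibble = maximum,
# both as powers of two. MDTS is in units of the minimum page size.
sqes=0x66 cqes=0x44 mdts=7 mpsmin_bytes=4096

printf 'SQ entry size: min %d, max %d bytes\n' \
    $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))   # 64 / 64
printf 'CQ entry size: min %d, max %d bytes\n' \
    $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))   # 16 / 16
printf 'Max data transfer: %d KiB\n' $(((mpsmin_bytes << mdts) / 1024))  # 512 KiB
```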
00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:14.808 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:14.809 
05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.809 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
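[Editor's note] The `for ns in "$ctrl/${ctrl##*/}n"*` loop in the trace above walks the controller's sysfs directory, runs nvme_get with `id-ns` for each namespace node it finds (nvme2n1, then nvme2n2 below), and records it in a nameref'd per-controller map keyed by namespace index. A standalone sketch of that enumeration, assuming the standard `/sys/class/nvme/nvmeXnY` layout; the variable names are illustrative, not SPDK's:

```bash
#!/usr/bin/env bash
# Sketch of the sysfs namespace walk driving the per-namespace dumps.
for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                # e.g. nvme2
    declare -A "${ctrl_dev}_ns=()"      # one map per controller
    declare -n ns_map=${ctrl_dev}_ns    # nameref, like "local -n" in the log
    for ns in "$ctrl/${ctrl_dev}n"*; do
        [[ -e $ns ]] || continue        # glob may not match anything
        ns_map[${ns##*n}]=${ns##*/}     # key 1 -> nvme2n1, 2 -> nvme2n2, ...
    done
    declare -p "${ctrl_dev}_ns"
    unset -n ns_map                     # drop the nameref between controllers
done
```

That per-index map is what `_ctrl_ns[${ns##*n}]=nvme2n1` is populating in the trace, and it is how the rest of the test later resolves a controller's namespaces without re-querying the device.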
00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:14.810 05:43:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:14.810 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.073 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:32:15.074 05:43:34 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:32:15.074 05:43:34 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.074 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
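The trace above is nvme/functions.sh's nvme_get walking the output of nvme id-ns line by line: with IFS set to ':', each read -r reg val splits a report line into a field name and a value, and every non-empty value is eval'd into a global associative array named after the device (nvme2n2 here, declared with local -gA). A minimal sketch of that loop, assuming nvme-cli's usual "key : value" layout; the exact trimming in functions.sh may differ:

    nvme_get_sketch() {
        local ref=$1 source=$2 reg val
        local -gA "$ref=()"                  # global array, e.g. nvme2n2
        while IFS=: read -r reg val; do
            reg=${reg%% *}                   # drop padding after the key
            val=${val# }                     # drop the space after ':'
            [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <(nvme id-ns "$source")
    }

    nvme_get_sketch nvme2n2 /dev/nvme2n2     # then ${nvme2n2[nsze]} -> 0x100000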
00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
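The lbaf0-lbaf7 strings captured next (ms:N lbads:N rp:N) describe the namespace's available LBA formats: ms is the metadata bytes per block, lbads is log2 of the data block size, and flbas=0x4 above marks lbaf4 (lbads:12, i.e. 4096-byte blocks with no metadata) as the format in use. A quick way to pull the block size out of one of these strings (hypothetical one-liner, not part of functions.sh):

    lbaf='ms:0 lbads:12 rp:0 (in use)'
    [[ $lbaf =~ lbads:([0-9]+) ]] && echo "block size: $((1 << BASH_REMATCH[1])) bytes"   # 4096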
00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 
05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.075 
05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.075 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:15.076 05:43:34 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:15.076 
05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:15.076 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:15.077 05:43:34 nvme_scc -- scripts/common.sh@18 -- # local i 00:32:15.077 05:43:34 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:15.077 05:43:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:15.077 05:43:34 nvme_scc -- scripts/common.sh@27 -- # return 0 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@18 -- # shift 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:15.077 05:43:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
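This fourth controller is picked up by the enumeration loop visible just above: functions.sh iterates /sys/class/nvme/nvme*, resolves each controller's PCI address (0000:00:13.0 for nvme3), and asks scripts/common.sh's pci_can_use whether the device may be touched; both filter lists are empty in this run (note the bare "[[ =~ 0000:00:13.0 ]]" and "[[ -z '' ]]" in the trace), so it returns 0 and nvme_get captures nvme id-ctrl the same way it captured id-ns. Roughly, with the allow/block list variable names being an assumption:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        # pci_can_use, approximately: skip if an allow-list exists and omits
        # the BDF, or if a block-list names it (both empty in this log).
        [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $pci "* ]] && continue
        [[ " ${PCI_BLOCKED:-} " == *" $pci "* ]] && continue
        nvme id-ctrl "/dev/${ctrl##*/}"
    done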
00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.077 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 
05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:15.078 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
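The `IFS=:` / `read -r reg val` / `eval` triplets traced above all come from the same register-scraping loop in nvme/functions.sh: each "reg : value" line printed by nvme-cli's id-ctrl is split on the first colon and cached in a per-controller associative array. A minimal standalone sketch of that pattern follows; the array name nvme3 and the device path are placeholders for whichever controller is being scanned, and the whitespace trimming is simplified relative to the real helper.

    # Sketch: cache `nvme id-ctrl` output into an associative array,
    # mirroring the IFS=: / read -r reg val loop traced above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue              # skip lines with no "reg : val" pair
        reg=${reg//[[:space:]]/}               # drop padding around the register name
        val=${val#"${val%%[![:space:]]*}"}     # drop leading whitespace from the value
        nvme3[$reg]=$val   # the real helper evals "${ref}[$reg]=..." because the
                           # target array name is itself held in a variable
    done < <(nvme id-ctrl /dev/nvme3)

Values that themselves contain colons (the ps0 power-state line, for instance) survive intact, because read assigns everything after the first colon to val.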
00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.079 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:15.080 05:43:34 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:32:15.080 
05:43:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:32:15.080 05:43:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:32:15.080 05:43:34 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:32:15.080 05:43:34 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:15.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:16.589 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:16.589 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:16.589 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:16.589 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:32:16.589 05:43:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:32:16.589 05:43:36 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:16.589 05:43:36 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:16.589 05:43:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:32:16.589 ************************************ 00:32:16.589 START TEST nvme_simple_copy 00:32:16.589 ************************************ 00:32:16.589 05:43:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:32:16.848 Initializing NVMe Controllers 00:32:16.848 Attaching to 0000:00:10.0 00:32:16.848 Controller supports SCC. Attached to 0000:00:10.0 00:32:16.848 Namespace ID: 1 size: 6GB 00:32:16.848 Initialization complete. 00:32:16.848 00:32:16.848 Controller QEMU NVMe Ctrl (12340 ) 00:32:16.848 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:32:16.848 Namespace Block Size:4096 00:32:16.848 Writing LBAs 0 to 63 with Random Data 00:32:16.848 Copied LBAs from 0 - 63 to the Destination LBA 256 00:32:16.848 LBAs matching Written Data: 64 00:32:16.848 00:32:16.848 real 0m0.319s 00:32:16.848 user 0m0.114s 00:32:16.848 sys 0m0.103s 00:32:16.848 05:43:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:16.848 ************************************ 00:32:16.848 END TEST nvme_simple_copy 00:32:16.848 ************************************ 00:32:16.848 05:43:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:32:17.112 00:32:17.112 real 0m8.857s 00:32:17.112 user 0m1.476s 00:32:17.112 sys 0m2.425s 00:32:17.112 ************************************ 00:32:17.112 END TEST nvme_scc 00:32:17.112 ************************************ 00:32:17.112 05:43:36 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:17.112 05:43:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:32:17.112 05:43:36 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:32:17.112 05:43:36 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:32:17.112 05:43:36 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:32:17.112 05:43:36 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:32:17.112 05:43:36 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:32:17.112 05:43:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:17.112 05:43:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:17.112 05:43:36 -- common/autotest_common.sh@10 -- # set +x 00:32:17.112 ************************************ 00:32:17.112 START TEST nvme_fdp 00:32:17.112 ************************************ 00:32:17.112 05:43:36 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:32:17.112 * Looking for test storage... 
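The nvme_scc pass above settled on nvme1 by testing bit 8 of each controller's cached ONCS value, which is how the NVMe spec advertises the (Simple) Copy command; all four QEMU controllers in this run report 0x15d, so any of them would have passed. A minimal sketch of that gate, using the value from this log:

    # Sketch of the ctrl_has_scc gate traced in the selection loop above.
    oncs=0x15d                     # Optional NVM Command Support, as reported here
    if (( oncs & 1 << 8 )); then   # bit 8: Copy command (Simple Copy)
        echo "controller supports Simple Copy"
    fi

The "Controller supports SCC" line in the simple_copy output is the test binary reaching the same conclusion from the identify data.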
00:32:17.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:17.112 05:43:37 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:17.112 05:43:37 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:32:17.112 05:43:37 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:17.381 05:43:37 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:32:17.381 05:43:37 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.381 05:43:37 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:17.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.381 --rc genhtml_branch_coverage=1 00:32:17.381 --rc genhtml_function_coverage=1 00:32:17.381 --rc genhtml_legend=1 00:32:17.381 --rc geninfo_all_blocks=1 00:32:17.381 --rc geninfo_unexecuted_blocks=1 00:32:17.381 00:32:17.381 ' 00:32:17.381 05:43:37 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:17.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.381 --rc genhtml_branch_coverage=1 00:32:17.381 --rc genhtml_function_coverage=1 00:32:17.381 --rc genhtml_legend=1 00:32:17.381 --rc geninfo_all_blocks=1 00:32:17.381 --rc geninfo_unexecuted_blocks=1 00:32:17.381 00:32:17.381 ' 00:32:17.381 05:43:37 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:32:17.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.381 --rc genhtml_branch_coverage=1 00:32:17.381 --rc genhtml_function_coverage=1 00:32:17.381 --rc genhtml_legend=1 00:32:17.381 --rc geninfo_all_blocks=1 00:32:17.381 --rc geninfo_unexecuted_blocks=1 00:32:17.381 00:32:17.381 ' 00:32:17.381 05:43:37 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:17.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.381 --rc genhtml_branch_coverage=1 00:32:17.381 --rc genhtml_function_coverage=1 00:32:17.381 --rc genhtml_legend=1 00:32:17.381 --rc geninfo_all_blocks=1 00:32:17.381 --rc geninfo_unexecuted_blocks=1 00:32:17.381 00:32:17.381 ' 00:32:17.381 05:43:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:17.381 05:43:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:32:17.381 05:43:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:32:17.381 05:43:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:17.381 05:43:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.381 05:43:37 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.381 05:43:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.381 05:43:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.381 05:43:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.381 05:43:37 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:32:17.381 05:43:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
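The "lt 1.15 2" trace a little earlier is a component-wise version comparison: both strings are split on `.`, `-`, and `:`, and the parts are compared numerically left to right, padding the shorter list with zeros. A condensed reimplementation of that flow, not the script's exact code (the real cmp_versions also validates each component as decimal first):

    # Sketch of the cmp_versions flow: returns 0 when $1 < $2.
    version_lt() {
        local -a a b
        local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing parts count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

Here lcov 1.15 compares below 2, which is why the legacy branch/function-coverage LCOV_OPTS get exported above.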
00:32:17.381 05:43:37 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:32:17.381 05:43:37 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:32:17.382 05:43:37 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:32:17.382 05:43:37 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.382 05:43:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:17.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:18.207 Waiting for block devices as requested 00:32:18.207 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.207 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.465 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.465 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:23.747 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:23.747 05:43:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:32:23.747 05:43:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:23.747 05:43:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:23.747 05:43:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:23.747 05:43:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
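The scan starting here walks /sys/class/nvme/nvme* and pairs each controller with a PCI address (pci=0000:00:11.0 and friends) before filtering with pci_can_use. A sketch of one way that mapping can be derived, assuming PCIe-attached controllers: the class device's "device" symlink resolves into the PCI topology, so its basename is the BDF. The loop body below is illustrative, not the script's exact code.

    # Sketch: map /sys/class/nvme entries back to their PCI BDFs.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        echo "${ctrl##*/} -> $bdf"   # e.g. nvme0 -> 0000:00:11.0
    done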
00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:32:23.747 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:32:23.748 05:43:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:32:23.748 05:43:43 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
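Once a controller's registers are cached like this, the accessors seen in the earlier nvme_scc selection pass (get_nvme_ctrl_feature binding `local -n _ctrl=nvme1` and echoing the cached value) read them back through a bash nameref rather than re-running nvme-cli. A simplified sketch of that read-back; get_reg is a hypothetical name, and it assumes the per-controller arrays populated by the scan already exist:

    # Sketch: read a cached register via a nameref onto the controller's array.
    get_reg() {
        local ctrl=$1 reg=$2
        local -n _ctrl=$ctrl               # bind to the nvme0/nvme1/... array (bash 4.3+)
        [[ -n ${_ctrl[$reg]} ]] || return 1
        echo "${_ctrl[$reg]}"
    }
    # e.g. after the scan above: get_reg nvme0 oncs   ->   0x15d

Caching identify data once and dereferencing it by name is what lets the feature-selection loops test four controllers without issuing any further admin commands.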
00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.748 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:32:23.749 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:32:23.749 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:23.749 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:32:23.750 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:23.750 
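The nvme_get trace above (nvme/functions.sh@16-23) is one parsing idiom repeated per field: run nvme-cli, split each output line on ':' into a register name and a value, and eval every non-empty pair into a global associative array named after the device. A minimal sketch of that idiom, assuming the whitespace trimming and reusing the nvme-cli path seen in the trace (a reconstruction, not the verbatim source):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. nvme0=() as a global associative array
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # the [[ -n ... ]] guards seen in the trace
            reg=${reg//[[:space:]]/}        # 'sqes      ' -> 'sqes' (assumed trimming)
            val=${val# }                    # drop the space after ':'
            eval "${ref}[\$reg]=\$val"      # -> nvme0[sqes]=0x66, nvme0[ps0]='mp:25.00W ...'
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # invoked as in the trace: nvme_get nvme0 id-ctrl /dev/nvme0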
05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:32:23.750 05:43:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.750 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:32:23.751 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.751 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:32:23.752 05:43:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:23.752 05:43:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:23.752 05:43:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:23.752 05:43:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:32:23.752 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 
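Around each of these per-field dumps sits the controller walk from nvme/functions.sh@47-63: every /sys/class/nvme/nvme* entry is checked against the PCI filter (pci_can_use, scripts/common.sh@18-27), identified, its namespaces identified in turn, and the results registered in the ctrls/nvmes/bdfs/ordered_ctrls maps. A condensed sketch under the same assumptions (the BDF lookup via readlink and the per-controller namespace array initialisation are guesses; the registration keys match the trace):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do
            [[ -e $ctrl ]] || continue
            ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
            pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0 (assumed lookup)
            pci_can_use "$pci" || continue                    # skip filtered BDFs
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            declare -gA "${ctrl_dev}_ns=()"                   # assumed initialisation
            local -n _ctrl_ns=${ctrl_dev}_ns
            for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme0/nvme0n1, ...
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace id
            done
            ctrls["$ctrl_dev"]=$ctrl_dev
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns
            bdfs["$ctrl_dev"]=$pci
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        done
    }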
05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.753 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 
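Once populated, these arrays make every identify field a plain lookup, which is presumably how the fdp test later picks a suitable controller. Two hypothetical helpers (not from functions.sh) illustrating the access pattern; the CTRATT bit number for Flexible Data Placement (bit 19 in NVMe 2.0) comes from the spec, not from this log:

    ctrl_field() {                 # ctrl_field nvme1 oncs  ->  0x15d
        local -n _ctrl=$1
        echo "${_ctrl[$2]}"
    }

    ctratt_bit_set() {             # ctratt_bit_set nvme1 19  ->  true iff FDP is advertised
        local -n _ctrl=$1
        (( _ctrl[ctratt] >> $2 & 1 ))
    }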
05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:32:23.754 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.754 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 
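The id-ns parses (nvme0n1 above, nvme1n1 just below) carry enough to recover the block geometry: FLBAS selects the in-use entry among lbaf0..lbaf7, and that entry's lbads field is the log2 of the data size. A hypothetical helper, assuming the format index sits in FLBAS bits 3:0:

    lba_data_size() {                               # lba_data_size nvme0n1 -> 4096
        local -n _ns=$1
        local fmt=$(( _ns[flbas] & 0xf ))           # nvme0n1: flbas=0x4 -> lbaf4
        local lbaf=${_ns[lbaf$fmt]}                 # 'ms:0 lbads:12 rp:0 (in use)'
        lbaf=${lbaf#*lbads:}
        lbaf=${lbaf%% *}                            # -> 12
        echo $(( 1 << lbaf ))                       # 2^12 = 4096 bytes
    }
    # sanity check: nvme0n1 has nsze=0x140000 blocks * 4096 B = 5 GiB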
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:32:23.755 05:43:43 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.755 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:23.756 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:32:23.757 05:43:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:23.757 05:43:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:23.757 05:43:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:23.757 05:43:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:32:23.757 
05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:32:23.757 05:43:43 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.757 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.758 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:23.759 05:43:43 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.759 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.759 05:43:43 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:32:23.760 05:43:43 nvme_fdp -- 
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:32:23.760 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
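The nvme2n1 dump above is produced by nvme_get in nvme/functions.sh: it runs nvme-cli against the device, splits each output line into a register/value pair, and evals the pair into a global associative array named after the device. A simplified re-sketch of that loop (assuming plain-text nvme id-ns output of the form "nsze : 0x100000"; the real helper does stricter trimming and shifting):

# Simplified sketch of the nvme_get parsing loop traced above.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                          # e.g. declare -gA nvme2n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # drop padding around the key
        val=${val#"${val%%[![:space:]]*}"}       # drop leading blanks only
        [[ -n $reg ]] && eval "${ref}[${reg}]=\"\$val\""
    done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
}

nvme_get_sketch nvme2n1 /dev/nvme2n1
echo "${nvme2n1[nsze]}"                          # 0x100000 on this target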
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:32:23.761 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:32:23.762 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
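All of the namespaces above report identical geometry, and the parsed fields are enough to compute the usable size: FLBAS bits 3:0 select the active LBA format, so flbas=0x4 points at lbaf4 ("lbads:12", i.e. 4096-byte blocks, marked "(in use)"), and nsze=0x100000 blocks works out to 4 GiB. A quick back-of-the-envelope check against the arrays just parsed (a sketch, not part of the test script):

# FLBAS bits 3:0 = index of the LBA format in use; lbads = log2(block size).
fmt=$(( ${nvme2n1[flbas]} & 0xf ))                         # -> 4
lbads=$(sed -n 's/.*lbads:\([0-9]\+\).*/\1/p' <<< "${nvme2n1[lbaf$fmt]}")
echo $(( ${nvme2n1[nsze]} * (1 << lbads) ))                # 4294967296 bytes = 4 GiB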
ns in "$ctrl/${ctrl##*/}n"* 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.763 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:32:23.764 
05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:32:23.764 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:23.765 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:23.765 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:32:23.765 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:32:23.765 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:32:24.027 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:32:24.027 05:43:43 nvme_fdp -- scripts/common.sh@18 -- # local i 00:32:24.027 05:43:43 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:24.027 05:43:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:24.027 05:43:43 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:32:24.027 05:43:43 nvme_fdp -- 
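Before touching nvme3, the loop asks scripts/common.sh whether its PCI address may be used; the trace above shows the block-list match finding nothing and the "[[ -z '' ]]" test short-circuiting to "return 0" because no allow list is set. Roughly, and assuming the space-separated PCI_BLOCKED/PCI_ALLOWED lists the test environment uses (a hedged re-sketch, not the verbatim helper):

# Sketch of the pci_can_use decision seen in the trace: block list vetoes,
# empty allow list admits everything, otherwise the address must be listed.
pci_can_use_sketch() {
    local i
    [[ " ${PCI_BLOCKED:-} " == *" $1 "* ]] && return 1   # explicit veto
    [[ -z ${PCI_ALLOWED:-} ]] && return 0                # no allow list: all ok
    for i in $PCI_ALLOWED; do
        [[ $i == "$1" ]] && return 0
    done
    return 1
}

pci_can_use_sketch 0000:00:13.0 && echo "0000:00:13.0 is usable"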
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:32:24.027 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.028 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 
05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:32:24.029 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
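The wall of trace above is nvme/functions.sh caching an Identify Controller dump: with IFS set to ':' it reads each "register : value" line, skips empty values, and evals the pair into a per-controller associative array (nvme3 here, bound to 0000:00:13.0 further down). A minimal sketch of the same idiom, assuming nvme-cli's 'nvme id-ctrl' as the producer and a fixed array name in place of the script's runtime-chosen one:

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # register names arrive padded, e.g. "mdts      "
        [[ -n $val ]] || continue     # skip banner and blank lines
        ctrl[$reg]=${val# }           # the real script needs eval only because the
                                      # array name (nvme3) is picked at runtime
    done < <(nvme id-ctrl /dev/nvme3)
    echo "mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

Because read assigns the remainder of the line to the last variable, values that themselves contain colons, such as the subnqn and the ps0 power-state descriptor seen in this trace, survive intact.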
00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
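The scan that begins here is the FDP capability probe: get_ctrls_with_feature walks every cached controller and ctrl_has_fdp tests CTRATT bit 19, so a hit requires the 0x80000 bit to be set. As the remainder of the scan below shows, nvme1, nvme0 and nvme2 all report ctratt 0x8000 and fail the test, while nvme3 reports 0x88010 and is echoed as the FDP target. A hedged reconstruction of that check, with the ctratt values taken from this log:

    ctrl_has_fdp() { (( $1 & 1 << 19 )); }   # CTRATT bit 19 = Flexible Data Placement
    for entry in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
        ctrl_has_fdp "${entry##*:}" && echo "${entry%%:*}"   # prints only nvme3
    done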
00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:32:24.030 05:43:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:32:24.030 05:43:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:32:24.030 05:43:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:24.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:25.559 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:32:25.559 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:25.559 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:25.559 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:32:25.559 05:43:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:25.559 05:43:45 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:25.559 05:43:45 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:25.559 05:43:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:25.559 ************************************ 00:32:25.559 START TEST nvme_flexible_data_placement 00:32:25.559 ************************************ 00:32:25.559 05:43:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:32:25.820 Initializing NVMe Controllers 00:32:25.820 Attaching to 0000:00:13.0 00:32:25.820 Controller supports FDP Attached to 0000:00:13.0 00:32:25.820 Namespace ID: 1 Endurance Group ID: 1 00:32:25.820 Initialization complete. 00:32:25.820 00:32:25.820 ================================== 00:32:25.820 == FDP tests for Namespace: #01 == 00:32:25.820 ================================== 00:32:25.820 00:32:25.820 Get Feature: FDP: 00:32:25.820 ================= 00:32:25.820 Enabled: Yes 00:32:25.820 FDP configuration Index: 0 00:32:25.820 00:32:25.820 FDP configurations log page 00:32:25.820 =========================== 00:32:25.820 Number of FDP configurations: 1 00:32:25.820 Version: 0 00:32:25.820 Size: 112 00:32:25.820 FDP Configuration Descriptor: 0 00:32:25.820 Descriptor Size: 96 00:32:25.820 Reclaim Group Identifier format: 2 00:32:25.820 FDP Volatile Write Cache: Not Present 00:32:25.820 FDP Configuration: Valid 00:32:25.820 Vendor Specific Size: 0 00:32:25.820 Number of Reclaim Groups: 2 00:32:25.820 Number of Reclaim Unit Handles: 8 00:32:25.820 Max Placement Identifiers: 128 00:32:25.820 Number of Namespaces Supported: 256 00:32:25.820 Reclaim Unit Nominal Size: 6000000 bytes 00:32:25.820 Estimated Reclaim Unit Time Limit: Not Reported 00:32:25.820 RUH Desc #000: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #001: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #002: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #003: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #004: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #005: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #006: RUH Type: Initially Isolated 00:32:25.820 RUH Desc #007: RUH Type: Initially Isolated 00:32:25.820 00:32:25.820 FDP reclaim unit handle usage log page 00:32:25.820 ====================================== 00:32:25.820 Number of Reclaim Unit Handles: 8 00:32:25.820 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:32:25.820 RUH Usage Desc #001: RUH Attributes: Unused 00:32:25.820 RUH Usage Desc #002: RUH Attributes: Unused 00:32:25.820 RUH Usage Desc #003: RUH Attributes: Unused 00:32:25.820 RUH Usage Desc #004: RUH Attributes: Unused 00:32:25.820 RUH Usage Desc #005: RUH Attributes: Unused 00:32:25.820 RUH Usage Desc #006: RUH Attributes: Unused 00:32:25.820 RUH Usage Desc #007: RUH Attributes: Unused 00:32:25.820 00:32:25.820 FDP statistics log page 00:32:25.820 ======================= 00:32:25.820 Host bytes with metadata written: 835538944 00:32:25.820 Media bytes with metadata written: 835694592 00:32:25.820 Media bytes erased: 0 00:32:25.820 00:32:25.820 FDP Reclaim unit handle status 00:32:25.820 ============================== 00:32:25.820 Number of RUHS descriptors: 2 00:32:25.820 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000432b 00:32:25.820 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:32:25.820 00:32:25.820 FDP write on placement id: 0 success 00:32:25.820 00:32:25.820 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:32:25.820 00:32:25.820 IO mgmt send: RUH update for Placement ID: #0 Success 00:32:25.820 00:32:25.820 Get Feature: FDP Events for Placement handle: #0 00:32:25.820 ======================== 00:32:25.820 Number of FDP Events: 6 00:32:25.820 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:32:25.820 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:32:25.820 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:32:25.820 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:32:25.820 FDP Event: #4 Type: Media Reallocated Enabled: No 00:32:25.820 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:32:25.820 00:32:25.820 FDP events log page 00:32:25.820 =================== 00:32:25.820 Number of FDP events: 1 00:32:25.820 FDP Event #0: 00:32:25.820 Event Type: RU Not Written to Capacity 00:32:25.820 Placement Identifier: Valid 00:32:25.820 NSID: Valid 00:32:25.820 Location: Valid 00:32:25.820 Placement Identifier: 0 00:32:25.820 Event Timestamp: 9 00:32:25.820 Namespace Identifier: 1 00:32:25.820 Reclaim Group Identifier: 0 00:32:25.820 Reclaim Unit Handle Identifier: 0 00:32:25.820 00:32:25.820 FDP test passed 00:32:25.820 00:32:25.820 real 0m0.312s 00:32:25.820 user 0m0.113s 00:32:25.820 sys 0m0.098s 00:32:25.820 ************************************ 00:32:25.820 END TEST nvme_flexible_data_placement 00:32:25.820 ************************************ 00:32:25.820 05:43:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:25.820 05:43:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:32:26.080 ************************************ 00:32:26.080 END TEST nvme_fdp 00:32:26.080 ************************************ 00:32:26.080 00:32:26.080 real 0m8.896s 00:32:26.080 user 0m1.494s 00:32:26.080 sys 0m2.451s 00:32:26.080 05:43:45 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:26.080 05:43:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:32:26.080 05:43:45 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:32:26.080 05:43:45 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:26.080 05:43:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:26.080 05:43:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:26.080 05:43:45 -- common/autotest_common.sh@10 -- # set +x 00:32:26.080 ************************************ 00:32:26.080 START TEST nvme_rpc 00:32:26.080 ************************************ 00:32:26.080 05:43:45 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:32:26.080 * Looking for test storage... 
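A quick sanity check one can run on the FDP statistics page reported above: media bytes with metadata written (835,694,592) exceed host bytes (835,538,944) by only 155,648 bytes, a media-to-host ratio of roughly 1.0002, which is the near-1.0 write amplification one would expect from a short pass against a fresh QEMU-backed namespace.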
00:32:26.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:26.080 05:43:45 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:26.080 05:43:45 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:32:26.080 05:43:45 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:26.340 05:43:46 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:26.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.340 --rc genhtml_branch_coverage=1 00:32:26.340 --rc genhtml_function_coverage=1 00:32:26.340 --rc genhtml_legend=1 00:32:26.340 --rc geninfo_all_blocks=1 00:32:26.340 --rc geninfo_unexecuted_blocks=1 00:32:26.340 00:32:26.340 ' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:26.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.340 --rc genhtml_branch_coverage=1 00:32:26.340 --rc genhtml_function_coverage=1 00:32:26.340 --rc genhtml_legend=1 00:32:26.340 --rc geninfo_all_blocks=1 00:32:26.340 --rc geninfo_unexecuted_blocks=1 00:32:26.340 00:32:26.340 ' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:32:26.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.340 --rc genhtml_branch_coverage=1 00:32:26.340 --rc genhtml_function_coverage=1 00:32:26.340 --rc genhtml_legend=1 00:32:26.340 --rc geninfo_all_blocks=1 00:32:26.340 --rc geninfo_unexecuted_blocks=1 00:32:26.340 00:32:26.340 ' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:26.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:26.340 --rc genhtml_branch_coverage=1 00:32:26.340 --rc genhtml_function_coverage=1 00:32:26.340 --rc genhtml_legend=1 00:32:26.340 --rc geninfo_all_blocks=1 00:32:26.340 --rc geninfo_unexecuted_blocks=1 00:32:26.340 00:32:26.340 ' 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67860 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:32:26.340 05:43:46 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67860 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67860 ']' 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:26.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:26.340 05:43:46 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:26.599 [2024-11-20 05:43:46.327515] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
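As traced a few records up, nvme_rpc.sh picks its PCI target before launching spdk_tgt: gen_nvme.sh enumerates local controllers as an SPDK JSON config, jq pulls out each traddr, and the first address wins. A sketch of that helper, with $rootdir standing in for /home/vagrant/spdk_repo/spdk:

    get_first_nvme_bdf() {
        local bdfs
        # gen_nvme.sh emits attach entries whose params carry a PCI traddr each
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # this run found four controllers
        echo "${bdfs[0]}"                   # 0000:00:10.0 in this log
    }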
00:32:26.599 [2024-11-20 05:43:46.327667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67860 ] 00:32:26.858 [2024-11-20 05:43:46.517549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:26.858 [2024-11-20 05:43:46.643543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.858 [2024-11-20 05:43:46.643581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.799 05:43:47 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:27.799 05:43:47 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:32:27.799 05:43:47 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:32:28.059 Nvme0n1 00:32:28.059 05:43:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:32:28.059 05:43:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:32:28.319 request: 00:32:28.319 { 00:32:28.319 "bdev_name": "Nvme0n1", 00:32:28.319 "filename": "non_existing_file", 00:32:28.319 "method": "bdev_nvme_apply_firmware", 00:32:28.319 "req_id": 1 00:32:28.319 } 00:32:28.319 Got JSON-RPC error response 00:32:28.319 response: 00:32:28.319 { 00:32:28.319 "code": -32603, 00:32:28.319 "message": "open file failed." 00:32:28.319 } 00:32:28.319 05:43:48 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:32:28.319 05:43:48 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:32:28.319 05:43:48 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:28.579 05:43:48 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:32:28.579 05:43:48 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67860 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67860 ']' 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67860 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67860 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67860' 00:32:28.579 killing process with pid 67860 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67860 00:32:28.579 05:43:48 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67860 00:32:31.874 ************************************ 00:32:31.874 END TEST nvme_rpc 00:32:31.874 ************************************ 00:32:31.874 00:32:31.874 real 0m5.266s 00:32:31.874 user 0m9.757s 00:32:31.874 sys 0m0.856s 00:32:31.874 05:43:51 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:31.874 05:43:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:31.874 05:43:51 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:31.874 05:43:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:32:31.874 05:43:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:31.874 05:43:51 -- common/autotest_common.sh@10 -- # set +x 00:32:31.874 ************************************ 00:32:31.874 START TEST nvme_rpc_timeouts 00:32:31.874 ************************************ 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:32:31.874 * Looking for test storage... 00:32:31.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.874 05:43:51 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:31.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.874 --rc genhtml_branch_coverage=1 00:32:31.874 --rc genhtml_function_coverage=1 00:32:31.874 --rc genhtml_legend=1 00:32:31.874 --rc geninfo_all_blocks=1 00:32:31.874 --rc geninfo_unexecuted_blocks=1 00:32:31.874 00:32:31.874 ' 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:31.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.874 --rc genhtml_branch_coverage=1 00:32:31.874 --rc genhtml_function_coverage=1 00:32:31.874 --rc genhtml_legend=1 00:32:31.874 --rc geninfo_all_blocks=1 00:32:31.874 --rc geninfo_unexecuted_blocks=1 00:32:31.874 00:32:31.874 ' 00:32:31.874 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:31.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.874 --rc genhtml_branch_coverage=1 00:32:31.874 --rc genhtml_function_coverage=1 00:32:31.874 --rc genhtml_legend=1 00:32:31.874 --rc geninfo_all_blocks=1 00:32:31.875 --rc geninfo_unexecuted_blocks=1 00:32:31.875 00:32:31.875 ' 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:31.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.875 --rc genhtml_branch_coverage=1 00:32:31.875 --rc genhtml_function_coverage=1 00:32:31.875 --rc genhtml_legend=1 00:32:31.875 --rc geninfo_all_blocks=1 00:32:31.875 --rc geninfo_unexecuted_blocks=1 00:32:31.875 00:32:31.875 ' 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67946 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67946 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67979 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:32:31.875 05:43:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67979 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67979 ']' 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:31.875 05:43:51 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:31.875 [2024-11-20 05:43:51.506131] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:32:31.875 [2024-11-20 05:43:51.506304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67979 ] 00:32:31.875 [2024-11-20 05:43:51.693085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:32.135 [2024-11-20 05:43:51.837832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.135 [2024-11-20 05:43:51.837900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.073 Checking default timeout settings: 00:32:33.073 05:43:52 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:33.073 05:43:52 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:32:33.073 05:43:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:32:33.073 05:43:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:33.643 Making settings changes with rpc: 00:32:33.643 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:32:33.643 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:32:33.643 Check default vs. modified settings: 00:32:33.643 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:32:33.643 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:34.213 Setting action_on_timeout is changed as expected. 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:34.213 Setting timeout_us is changed as expected. 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
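The check loop above pulls each field out of the saved JSON configs with a grep | awk | sed pipeline and compares the default value against the modified one. A minimal standalone sketch of that extraction pattern (the helper name and hard-coded file paths here are illustrative, not part of the harness):

  #!/usr/bin/env bash
  # Read one setting out of a config saved by rpc.py save_config:
  # grep the key, take the second whitespace-separated field ("none",),
  # then strip everything that is not alphanumeric (none).
  get_setting() {
      local key=$1 file=$2
      grep "$key" "$file" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
  }

  before=$(get_setting action_on_timeout /tmp/settings_default_67946)
  after=$(get_setting action_on_timeout /tmp/settings_modified_67946)

  if [ "$before" == "$after" ]; then
      echo "Setting action_on_timeout was not changed" >&2
      exit 1
  fi
  echo "Setting action_on_timeout is changed as expected."

The sed step is what makes the comparison robust: it reduces '"none",' and '"abort",' to bare tokens before the string test at nvme_rpc_timeouts.sh@42.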
00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:32:34.213 Setting timeout_admin_us is changed as expected. 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67946 /tmp/settings_modified_67946 00:32:34.213 05:43:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67979 00:32:34.213 05:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67979 ']' 00:32:34.213 05:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67979 00:32:34.213 05:43:53 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67979 00:32:34.213 killing process with pid 67979 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67979' 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67979 00:32:34.213 05:43:54 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67979 00:32:37.614 RPC TIMEOUT SETTING TEST PASSED. 00:32:37.614 05:43:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
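killprocess above refuses to signal blindly: it first confirms the pid is alive with kill -0, then (on Linux) resolves the process name via ps so that a recycled pid belonging to an unrelated process, or a sudo wrapper, is never killed. A condensed sketch of that guard, reconstructed from the xtrace; details such as the sudo branch are simplified here:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # The real helper special-cases sudo; this sketch simply refuses.
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                  # SIGTERM, then reap the child
  }

wait only reaps children of the calling shell, which holds here because the harness launched spdk_tgt itself.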
00:32:37.614 ************************************ 00:32:37.614 END TEST nvme_rpc_timeouts 00:32:37.614 ************************************ 00:32:37.614 00:32:37.614 real 0m5.671s 00:32:37.614 user 0m10.603s 00:32:37.614 sys 0m0.978s 00:32:37.614 05:43:56 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:37.614 05:43:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:32:37.614 05:43:56 -- spdk/autotest.sh@239 -- # uname -s 00:32:37.614 05:43:56 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:32:37.614 05:43:56 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:37.614 05:43:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:37.614 05:43:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:37.614 05:43:56 -- common/autotest_common.sh@10 -- # set +x 00:32:37.614 ************************************ 00:32:37.614 START TEST sw_hotplug 00:32:37.614 ************************************ 00:32:37.614 05:43:56 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:32:37.614 * Looking for test storage... 00:32:37.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:37.614 05:43:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:37.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.614 --rc genhtml_branch_coverage=1 00:32:37.614 --rc genhtml_function_coverage=1 00:32:37.614 --rc genhtml_legend=1 00:32:37.614 --rc geninfo_all_blocks=1 00:32:37.614 --rc geninfo_unexecuted_blocks=1 00:32:37.614 00:32:37.614 ' 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:37.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.614 --rc genhtml_branch_coverage=1 00:32:37.614 --rc genhtml_function_coverage=1 00:32:37.614 --rc genhtml_legend=1 00:32:37.614 --rc geninfo_all_blocks=1 00:32:37.614 --rc geninfo_unexecuted_blocks=1 00:32:37.614 00:32:37.614 ' 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:37.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.614 --rc genhtml_branch_coverage=1 00:32:37.614 --rc genhtml_function_coverage=1 00:32:37.614 --rc genhtml_legend=1 00:32:37.614 --rc geninfo_all_blocks=1 00:32:37.614 --rc geninfo_unexecuted_blocks=1 00:32:37.614 00:32:37.614 ' 00:32:37.614 05:43:57 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:37.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:37.614 --rc genhtml_branch_coverage=1 00:32:37.614 --rc genhtml_function_coverage=1 00:32:37.614 --rc genhtml_legend=1 00:32:37.614 --rc geninfo_all_blocks=1 00:32:37.614 --rc geninfo_unexecuted_blocks=1 00:32:37.614 00:32:37.614 ' 00:32:37.614 05:43:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:37.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:38.136 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:38.136 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:38.136 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:38.136 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:38.136 05:43:57 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:32:38.136 05:43:57 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:32:38.136 05:43:57 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
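The nvmes=($(nvme_in_userspace)) call at sw_hotplug.sh@133 above builds the controller list by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), as the scripts/common.sh xtrace on the following lines shows. A standalone sketch of that filter, minus the harness's pci_can_use allow/deny checks:

  # Print the PCI address of every NVMe controller in the system.
  # Assumes lspci is installed; -mm machine-readable, -n numeric IDs,
  # -D include the PCI domain in the slot column.
  iter_nvme_bdfs() {
      # cc carries the quotes on purpose: lspci -mm quotes the class field,
      # so '"0108"' matches $2 verbatim (class 01, subclass 08).
      lspci -mm -n -D | grep -i -- -p02 |
          awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  }

  iter_nvme_bdfs    # e.g. 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0

The grep on -p02 keeps only prog-if 02 devices, matching the progif=02 computed at scripts/common.sh@238.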
00:32:38.136 05:43:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@233 -- # local class 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:38.136 05:43:58 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:32:38.136 05:43:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:32:38.397 05:43:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:32:38.397 05:43:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:32:38.397 05:43:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:32:38.397 05:43:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:38.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:39.262 Waiting for block devices as requested 00:32:39.262 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:39.262 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:39.262 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:39.543 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:44.818 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:44.818 05:44:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:32:44.818 05:44:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:45.075 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:32:45.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:45.333 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:32:45.591 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:32:45.850 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:45.850 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:32:46.110 05:44:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68869 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:32:46.110 05:44:05 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:32:46.110 05:44:05 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:32:46.110 05:44:05 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:32:46.110 05:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:32:46.110 05:44:05 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:32:46.110 05:44:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:32:46.368 Initializing NVMe Controllers 00:32:46.368 Attaching to 0000:00:10.0 00:32:46.368 Attaching to 0000:00:11.0 00:32:46.368 Attached to 0000:00:11.0 00:32:46.368 Attached to 0000:00:10.0 00:32:46.368 Initialization complete. Starting I/O... 
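With the hotplug app attached to both controllers, each of the three events that follow is a software-driven surprise removal and re-add. The echo 1 steps at sw_hotplug.sh@40 and @56 are consistent with the standard Linux sysfs hotplug interface; a sketch of one cycle (the bdf value is illustrative, and the exact sysfs paths are an assumption inferred from that interface, not quoted from the script):

  bdf=0000:00:10.0                              # the test iterates ${nvmes[@]}

  echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the device node
  sleep 6                                       # $hotplug_wait: let the app notice
  echo 1 > /sys/bus/pci/rescan                  # re-enumerate; the device returns

Once the device is back, the @58-@62 echo sequence hands it to uio_pci_generic again so the next attach can find it in userspace.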
00:32:46.368 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:32:46.368 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:32:46.368 00:32:47.304 QEMU NVMe Ctrl (12341 ): 1355 I/Os completed (+1355) 00:32:47.304 QEMU NVMe Ctrl (12340 ): 1361 I/Os completed (+1361) 00:32:47.304 00:32:48.680 QEMU NVMe Ctrl (12341 ): 3199 I/Os completed (+1844) 00:32:48.680 QEMU NVMe Ctrl (12340 ): 3207 I/Os completed (+1846) 00:32:48.680 00:32:49.615 QEMU NVMe Ctrl (12341 ): 5235 I/Os completed (+2036) 00:32:49.615 QEMU NVMe Ctrl (12340 ): 5247 I/Os completed (+2040) 00:32:49.615 00:32:50.551 QEMU NVMe Ctrl (12341 ): 7388 I/Os completed (+2153) 00:32:50.551 QEMU NVMe Ctrl (12340 ): 7475 I/Os completed (+2228) 00:32:50.551 00:32:51.531 QEMU NVMe Ctrl (12341 ): 9652 I/Os completed (+2264) 00:32:51.531 QEMU NVMe Ctrl (12340 ): 9739 I/Os completed (+2264) 00:32:51.531 00:32:52.098 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:32:52.098 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:52.098 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:52.098 [2024-11-20 05:44:11.953143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:32:52.098 Controller removed: QEMU NVMe Ctrl (12340 ) 00:32:52.099 [2024-11-20 05:44:11.954823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:11.954886] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:11.954909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:11.954933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:32:52.099 [2024-11-20 05:44:11.957690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:11.957748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:11.957766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:11.957784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:32:52.099 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:32:52.099 [2024-11-20 05:44:11.998225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:32:52.099 Controller removed: QEMU NVMe Ctrl (12341 ) 00:32:52.099 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:32:52.099 05:44:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:32:52.099 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:32:52.099 EAL: Scan for (pci) bus failed. 
00:32:52.099 [2024-11-20 05:44:12.000834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:12.000920] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:12.000972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:12.001019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:32:52.099 [2024-11-20 05:44:12.006011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:12.006196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:12.006246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.099 [2024-11-20 05:44:12.006280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:32:52.358 Attaching to 0000:00:10.0 00:32:52.358 Attached to 0000:00:10.0 00:32:52.358 QEMU NVMe Ctrl (12340 ): 24 I/Os completed (+24) 00:32:52.358 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:32:52.358 05:44:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:32:52.358 Attaching to 0000:00:11.0 00:32:52.358 Attached to 0000:00:11.0 00:32:53.294 QEMU NVMe Ctrl (12340 ): 2232 I/Os completed (+2208) 00:32:53.294 QEMU NVMe Ctrl (12341 ): 2101 I/Os completed (+2101) 00:32:53.294 00:32:54.669 QEMU NVMe Ctrl (12340 ): 4470 I/Os completed (+2238) 00:32:54.669 QEMU NVMe Ctrl (12341 ): 4401 I/Os completed (+2300) 00:32:54.669 00:32:55.606 QEMU NVMe Ctrl (12340 ): 6706 I/Os completed (+2236) 00:32:55.606 QEMU NVMe Ctrl (12341 ): 6637 I/Os completed (+2236) 00:32:55.606 00:32:56.542 QEMU NVMe Ctrl (12340 ): 8962 I/Os completed (+2256) 00:32:56.542 QEMU NVMe Ctrl (12341 ): 8893 I/Os completed (+2256) 00:32:56.542 00:32:57.477 QEMU NVMe Ctrl (12340 ): 11242 I/Os completed (+2280) 00:32:57.477 QEMU NVMe Ctrl (12341 ): 11173 I/Os completed (+2280) 00:32:57.477 00:32:58.415 QEMU NVMe Ctrl (12340 ): 13314 I/Os completed (+2072) 00:32:58.415 QEMU NVMe Ctrl (12341 ): 13322 I/Os completed (+2149) 00:32:58.415 00:32:59.352 QEMU NVMe Ctrl (12340 ): 15365 I/Os completed (+2051) 00:32:59.352 QEMU NVMe Ctrl (12341 ): 15366 I/Os completed (+2044) 00:32:59.352 00:33:00.288 QEMU NVMe Ctrl (12340 ): 17365 I/Os completed (+2000) 00:33:00.288 QEMU NVMe Ctrl (12341 ): 17401 I/Os completed (+2035) 00:33:00.288 00:33:01.664 QEMU NVMe Ctrl (12340 ): 19464 I/Os completed (+2099) 00:33:01.664 QEMU NVMe Ctrl (12341 ): 19520 I/Os completed (+2119) 
00:33:01.664 00:33:02.599 QEMU NVMe Ctrl (12340 ): 21652 I/Os completed (+2188) 00:33:02.599 QEMU NVMe Ctrl (12341 ): 21718 I/Os completed (+2198) 00:33:02.599 00:33:03.537 QEMU NVMe Ctrl (12340 ): 23976 I/Os completed (+2324) 00:33:03.537 QEMU NVMe Ctrl (12341 ): 24042 I/Os completed (+2324) 00:33:03.537 00:33:04.474 QEMU NVMe Ctrl (12340 ): 26315 I/Os completed (+2339) 00:33:04.474 QEMU NVMe Ctrl (12341 ): 26381 I/Os completed (+2339) 00:33:04.474 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:04.474 [2024-11-20 05:44:24.243279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:04.474 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:04.474 [2024-11-20 05:44:24.245799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.245946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.245984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.246019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:04.474 [2024-11-20 05:44:24.249770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.249874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.249904] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.249930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:04.474 [2024-11-20 05:44:24.271299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
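These remove/attach cycles run under the timing wrapper set up before the first event (timing_cmd, autotest_common.sh@707-@717), which is what later produces the 'remove_attach_helper took ...s' line. The wrapper captures only bash's %2R seconds figure while letting the helper's own output through. A sketch reconstructed from the xtrace; the fd handling mirrors what the exec at @709 suggests, but treat it as an approximation:

  timing_cmd() (
      # Subshell, so the fd redirections below do not leak to the caller.
      local cmd_es=0
      local time=0 TIMEFORMAT=%2R
      exec 3>&1                     # keep the real stdout reachable on fd 3
      # The time keyword reports on the group's stderr; send the command
      # itself to fd 3 and capture only the elapsed-seconds report.
      time=$({ time "$@" >&3 2>&3; } 2>&1) || cmd_es=$?
      echo "$time"
      return "$cmd_es"
  )

  helper_time=$(timing_cmd remove_attach_helper 3 6 false)
  printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
      "$helper_time" 2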
00:33:04.474 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:04.474 [2024-11-20 05:44:24.273563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.273631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.273672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.273701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:04.474 [2024-11-20 05:44:24.277211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.277271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.277310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 [2024-11-20 05:44:24.277336] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:04.474 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:04.474 EAL: Scan for (pci) bus failed. 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:04.474 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:04.733 Attaching to 0000:00:10.0 00:33:04.733 Attached to 0000:00:10.0 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:04.733 05:44:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:04.733 Attaching to 0000:00:11.0 00:33:04.733 Attached to 0000:00:11.0 00:33:05.299 QEMU NVMe Ctrl (12340 ): 1468 I/Os completed (+1468) 00:33:05.299 QEMU NVMe Ctrl (12341 ): 1274 I/Os completed (+1274) 00:33:05.299 00:33:06.675 QEMU NVMe Ctrl (12340 ): 3720 I/Os completed (+2252) 00:33:06.675 QEMU NVMe Ctrl (12341 ): 3526 I/Os completed (+2252) 00:33:06.675 00:33:07.241 QEMU NVMe Ctrl (12340 ): 5964 I/Os completed (+2244) 00:33:07.241 QEMU NVMe Ctrl (12341 ): 5770 I/Os completed (+2244) 00:33:07.241 00:33:08.646 QEMU NVMe Ctrl (12340 ): 8140 I/Os completed (+2176) 00:33:08.646 QEMU NVMe Ctrl (12341 ): 8010 I/Os completed (+2240) 00:33:08.646 00:33:09.582 QEMU NVMe Ctrl (12340 ): 10392 I/Os completed (+2252) 00:33:09.583 QEMU NVMe Ctrl (12341 ): 10262 I/Os completed (+2252) 00:33:09.583 00:33:10.519 QEMU NVMe Ctrl (12340 ): 12688 I/Os completed (+2296) 00:33:10.519 QEMU NVMe Ctrl (12341 ): 12559 I/Os completed (+2297) 00:33:10.519 00:33:11.456 QEMU NVMe Ctrl (12340 ): 15028 I/Os completed (+2340) 00:33:11.456 QEMU NVMe Ctrl (12341 ): 14899 I/Os completed (+2340) 00:33:11.456 
00:33:12.394 QEMU NVMe Ctrl (12340 ): 17340 I/Os completed (+2312) 00:33:12.394 QEMU NVMe Ctrl (12341 ): 17211 I/Os completed (+2312) 00:33:12.394 00:33:13.330 QEMU NVMe Ctrl (12340 ): 19572 I/Os completed (+2232) 00:33:13.330 QEMU NVMe Ctrl (12341 ): 19467 I/Os completed (+2256) 00:33:13.330 00:33:14.268 QEMU NVMe Ctrl (12340 ): 21742 I/Os completed (+2170) 00:33:14.268 QEMU NVMe Ctrl (12341 ): 21666 I/Os completed (+2199) 00:33:14.268 00:33:15.647 QEMU NVMe Ctrl (12340 ): 24088 I/Os completed (+2346) 00:33:15.647 QEMU NVMe Ctrl (12341 ): 24008 I/Os completed (+2342) 00:33:15.647 00:33:16.584 QEMU NVMe Ctrl (12340 ): 26432 I/Os completed (+2344) 00:33:16.584 QEMU NVMe Ctrl (12341 ): 26352 I/Os completed (+2344) 00:33:16.584 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:16.844 [2024-11-20 05:44:36.588045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:33:16.844 Controller removed: QEMU NVMe Ctrl (12340 ) 00:33:16.844 [2024-11-20 05:44:36.589718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.589793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.589907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.589973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:16.844 [2024-11-20 05:44:36.593083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.593173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.593198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.593218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:16.844 [2024-11-20 05:44:36.623809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:16.844 Controller removed: QEMU NVMe Ctrl (12341 ) 00:33:16.844 [2024-11-20 05:44:36.625514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.625631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.625688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.625735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:16.844 [2024-11-20 05:44:36.628630] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.628714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.628768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 [2024-11-20 05:44:36.628818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:16.844 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:33:16.844 EAL: Scan for (pci) bus failed. 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:16.844 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:33:17.104 Attaching to 0000:00:10.0 00:33:17.104 Attached to 0000:00:10.0 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:17.104 05:44:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:17.104 Attaching to 0000:00:11.0 00:33:17.104 Attached to 0000:00:11.0 00:33:17.104 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:33:17.104 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:33:17.104 [2024-11-20 05:44:36.917100] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:33:29.371 05:44:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:33:29.371 05:44:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:29.371 05:44:48 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.96 00:33:29.371 05:44:48 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.96 00:33:29.371 05:44:48 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:33:29.371 05:44:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.96 00:33:29.371 05:44:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.96 2 00:33:29.371 remove_attach_helper took 42.96s to complete (handling 2 nvme drive(s)) 05:44:48 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68869 00:33:35.938 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68869) - No such process 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68869 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69404 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:33:35.938 05:44:54 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69404 00:33:35.938 05:44:54 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 69404 ']' 00:33:35.938 05:44:54 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:35.938 05:44:54 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:35.938 05:44:54 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:35.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:35.938 05:44:54 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:35.938 05:44:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:35.938 [2024-11-20 05:44:55.047795] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
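waitforlisten above (autotest_common.sh@833-@842) blocks until the freshly started spdk_tgt accepts RPCs on /var/tmp/spdk.sock, bailing out early if the process dies. A minimal sketch of such a wait loop; the retry cadence and the use of rpc_get_methods as the probe are assumptions, and the real helper differs in detail:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100 i=0
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( i++ < max_retries )); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
          # rpc_get_methods succeeds once the socket answers a round-trip
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                  rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      return 1                                     # never came up
  }

Only after this returns does the test enable hotplug monitoring in the target with rpc_cmd bdev_nvme_set_hotplug -e.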
00:33:35.938 [2024-11-20 05:44:55.047966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69404 ] 00:33:35.938 [2024-11-20 05:44:55.222642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.938 [2024-11-20 05:44:55.396863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:33:36.875 05:44:56 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:33:36.875 05:44:56 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:43.440 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:43.440 05:45:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.440 05:45:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:43.440 [2024-11-20 05:45:02.700326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:33:43.440 [2024-11-20 05:45:02.703251] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.440 [2024-11-20 05:45:02.703357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.440 [2024-11-20 05:45:02.703410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.440 [2024-11-20 05:45:02.703442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.440 [2024-11-20 05:45:02.703455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.440 [2024-11-20 05:45:02.703468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.440 [2024-11-20 05:45:02.703481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.440 [2024-11-20 05:45:02.703494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.440 [2024-11-20 05:45:02.703505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.440 [2024-11-20 05:45:02.703534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.441 [2024-11-20 05:45:02.703545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.441 [2024-11-20 05:45:02.703558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.441 05:45:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.441 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:33:43.441 05:45:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:43.441 [2024-11-20 05:45:03.099616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:43.441 [2024-11-20 05:45:03.102556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.441 [2024-11-20 05:45:03.102679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.441 [2024-11-20 05:45:03.102705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.441 [2024-11-20 05:45:03.102732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.441 [2024-11-20 05:45:03.102746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.441 [2024-11-20 05:45:03.102757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.441 [2024-11-20 05:45:03.102772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.441 [2024-11-20 05:45:03.102782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.441 [2024-11-20 05:45:03.102795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.441 [2024-11-20 05:45:03.102823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:43.441 [2024-11-20 05:45:03.102837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.441 [2024-11-20 05:45:03.102847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:43.441 05:45:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.441 05:45:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:43.441 05:45:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:43.441 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:43.699 05:45:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:55.917 05:45:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.917 05:45:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:55.917 05:45:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:55.917 05:45:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:55.917 05:45:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:55.917 [2024-11-20 05:45:15.675608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:33:55.917 [2024-11-20 05:45:15.678446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:55.917 [2024-11-20 05:45:15.678549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.917 [2024-11-20 05:45:15.678612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.917 [2024-11-20 05:45:15.678689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:55.917 [2024-11-20 05:45:15.678735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.917 [2024-11-20 05:45:15.678798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.917 [2024-11-20 05:45:15.678870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:55.917 [2024-11-20 05:45:15.678908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.917 [2024-11-20 05:45:15.678968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.917 [2024-11-20 05:45:15.679026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:55.917 [2024-11-20 05:45:15.679061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:55.917 [2024-11-20 05:45:15.679111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.917 05:45:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:33:55.917 05:45:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:33:56.177 [2024-11-20 05:45:16.074850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:33:56.177 [2024-11-20 05:45:16.077691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:56.177 [2024-11-20 05:45:16.077858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.177 [2024-11-20 05:45:16.077928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.177 [2024-11-20 05:45:16.077998] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:56.177 [2024-11-20 05:45:16.078041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.177 [2024-11-20 05:45:16.078091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.177 [2024-11-20 05:45:16.078141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:56.177 [2024-11-20 05:45:16.078174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.177 [2024-11-20 05:45:16.078227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.177 [2024-11-20 05:45:16.078276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:33:56.177 [2024-11-20 05:45:16.078311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:33:56.177 [2024-11-20 05:45:16.078363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:33:56.437 05:45:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:56.437 05:45:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:33:56.437 05:45:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:33:56.437 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:33:56.696 05:45:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:08.913 05:45:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.913 05:45:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:08.913 05:45:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:08.913 [2024-11-20 05:45:28.650847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:08.913 [2024-11-20 05:45:28.653842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:08.913 [2024-11-20 05:45:28.653937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.913 [2024-11-20 05:45:28.653995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.913 [2024-11-20 05:45:28.654073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:08.913 [2024-11-20 05:45:28.654121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.913 [2024-11-20 05:45:28.654180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.913 [2024-11-20 05:45:28.654235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:08.913 [2024-11-20 05:45:28.654272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.913 [2024-11-20 05:45:28.654320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.913 [2024-11-20 05:45:28.654376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:08.913 [2024-11-20 05:45:28.654410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:08.913 [2024-11-20 05:45:28.654464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:08.913 05:45:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.913 05:45:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:08.913 05:45:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:08.913 05:45:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:09.490 [2024-11-20 05:45:29.149891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:09.490 [2024-11-20 05:45:29.152601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:09.490 [2024-11-20 05:45:29.152694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.490 [2024-11-20 05:45:29.152752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.490 [2024-11-20 05:45:29.152826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:09.490 [2024-11-20 05:45:29.152864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.490 [2024-11-20 05:45:29.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.490 [2024-11-20 05:45:29.152993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:09.490 [2024-11-20 05:45:29.153046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.490 [2024-11-20 05:45:29.153129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.490 [2024-11-20 05:45:29.153187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:09.490 [2024-11-20 05:45:29.153225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:09.490 [2024-11-20 05:45:29.153296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:34:09.490 05:45:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.490 05:45:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:09.490 05:45:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:09.490 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:09.747 05:45:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.99 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.99 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.99 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.99 2 00:34:21.950 remove_attach_helper took 44.99s to complete (handling 2 nvme drive(s)) 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:34:21.950 05:45:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:34:21.950 05:45:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:28.515 05:45:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.515 05:45:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:28.515 [2024-11-20 05:45:47.730489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
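Before this pass the test tears down and re-arms SPDK's own hotplug poller over RPC (the bdev_nvme_set_hotplug -d and -e calls just above), so the following events exercise the bdev layer's detach path. The three arguments of debug_remove_attach_helper map onto the locals visible in the trace: hotplug_events=3, hotplug_wait=6 (seconds), use_bdev=true. A minimal sketch of the RPC step outside the harness, assuming a built SPDK checkout and the default local RPC socket (rpc_cmd in the trace is a thin wrapper around rpc.py):

    # mirror the rpc_cmd calls from the trace
    ./scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the hotplug poller
    ./scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it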
00:34:28.515 [2024-11-20 05:45:47.732218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:47.732303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:47.732371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 [2024-11-20 05:45:47.732432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:47.732462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:47.732506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 [2024-11-20 05:45:47.732546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:47.732578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:47.732616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 [2024-11-20 05:45:47.732658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:47.732686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:47.732735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 05:45:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:34:28.515 05:45:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:28.515 [2024-11-20 05:45:48.129738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
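The surrounding wait loop ((( 2 > 0 )) followed by sleep 0.5) polls until no NVMe bdev still reports a PCI address. The sh@12/sh@13 lines in the trace show that the bdev_bdfs helper is effectively this pipeline:

    # derive the set of attached NVMe BDFs from the bdev layer (sw_hotplug.sh@12-13)
    rpc_cmd bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' \
      | sort -u

After re-attach, the sh@71 check compares the sorted output against the expected pair; the heavily backslash-escaped right-hand side shown in the trace (\0\0\0\0\:\0\0\:\1\0\.\0 ...) is simply bash xtrace revealing that a quoted string inside [[ == ]] is matched literally, character by character, rather than as a glob.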
00:34:28.515 [2024-11-20 05:45:48.132011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:48.132090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:48.132138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 [2024-11-20 05:45:48.132182] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:48.132207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:48.132238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 [2024-11-20 05:45:48.132287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:48.132310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:48.132379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 [2024-11-20 05:45:48.132424] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:28.515 [2024-11-20 05:45:48.132457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:28.515 [2024-11-20 05:45:48.132496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:28.515 05:45:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.515 05:45:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:28.515 05:45:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:28.515 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:28.516 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:28.516 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:28.516 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:28.775 05:45:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:40.981 05:46:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.981 05:46:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:40.981 05:46:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.981 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:40.982 [2024-11-20 05:46:00.705770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:34:40.982 [2024-11-20 05:46:00.708003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:40.982 [2024-11-20 05:46:00.708099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:40.982 [2024-11-20 05:46:00.708158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.982 [2024-11-20 05:46:00.708233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:40.982 [2024-11-20 05:46:00.708268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:40.982 [2024-11-20 05:46:00.708287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.982 [2024-11-20 05:46:00.708300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:40.982 [2024-11-20 05:46:00.708314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:40.982 [2024-11-20 05:46:00.708325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.982 [2024-11-20 05:46:00.708339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:40.982 [2024-11-20 05:46:00.708350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:40.982 [2024-11-20 05:46:00.708363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:40.982 05:46:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:40.982 05:46:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:40.982 05:46:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:40.982 05:46:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:41.240 [2024-11-20 05:46:01.105009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:34:41.240 [2024-11-20 05:46:01.107183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:41.240 [2024-11-20 05:46:01.107233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.240 [2024-11-20 05:46:01.107254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.240 [2024-11-20 05:46:01.107281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:41.240 [2024-11-20 05:46:01.107301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.240 [2024-11-20 05:46:01.107313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.240 [2024-11-20 05:46:01.107327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:41.240 [2024-11-20 05:46:01.107338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.240 [2024-11-20 05:46:01.107362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.240 [2024-11-20 05:46:01.107373] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:41.240 [2024-11-20 05:46:01.107385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.240 [2024-11-20 05:46:01.107395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:41.499 05:46:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.499 05:46:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:41.499 05:46:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:41.499 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:41.757 05:46:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:53.984 05:46:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.984 05:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:53.984 05:46:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:53.984 [2024-11-20 05:46:13.681001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:34:53.984 [2024-11-20 05:46:13.683296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:53.984 [2024-11-20 05:46:13.683400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.984 [2024-11-20 05:46:13.683455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.984 [2024-11-20 05:46:13.683524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:53.984 [2024-11-20 05:46:13.683560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.984 [2024-11-20 05:46:13.683614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.984 [2024-11-20 05:46:13.683680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:53.984 [2024-11-20 05:46:13.683728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.984 [2024-11-20 05:46:13.683778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.984 [2024-11-20 05:46:13.683851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:53.984 [2024-11-20 05:46:13.683892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.984 [2024-11-20 05:46:13.683942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:34:53.984 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:53.985 05:46:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:53.985 05:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:53.985 05:46:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:34:53.985 05:46:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:34:54.261 [2024-11-20 05:46:14.080241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
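The echo sequence that recurs at sh@56 through sh@62 (a bare 1, then per device: uio_pci_generic, the BDF twice, and an empty string) is the re-attach half of the helper. The script body is not echoed into this log, so the redirection targets are not visible here; the values are consistent with the standard Linux PCI sysfs rebind dance, and one hypothetical reconstruction, with every path an assumption rather than something the trace confirms, is:

    # hypothetical mapping of the sh@56-62 echoes onto PCI sysfs attributes
    echo 1 > /sys/bus/pci/rescan                                         # sh@56
    for bdf in 0000:00:10.0 0000:00:11.0; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override" # sh@59
      echo "$bdf" > /sys/bus/pci/drivers_probe                           # sh@60/61: rebind
      echo '' > "/sys/bus/pci/devices/$bdf/driver_override"              # sh@62: clear override
    done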
00:34:54.261 [2024-11-20 05:46:14.082200] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:54.261 [2024-11-20 05:46:14.082250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.261 [2024-11-20 05:46:14.082272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.261 [2024-11-20 05:46:14.082297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:54.261 [2024-11-20 05:46:14.082312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.261 [2024-11-20 05:46:14.082323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.261 [2024-11-20 05:46:14.082339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:54.261 [2024-11-20 05:46:14.082350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.261 [2024-11-20 05:46:14.082364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.261 [2024-11-20 05:46:14.082376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:34:54.261 [2024-11-20 05:46:14.082393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:34:54.261 [2024-11-20 05:46:14.082404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:34:54.520 05:46:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:54.520 05:46:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:34:54.520 05:46:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:54.520 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:34:54.779 05:46:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.02 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.02 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.02 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.02 2 00:35:07.015 remove_attach_helper took 45.02s to complete (handling 2 nvme drive(s)) 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:35:07.015 05:46:26 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69404 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 69404 ']' 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 69404 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69404 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69404' 00:35:07.015 killing process with pid 69404 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@971 -- # kill 69404 00:35:07.015 05:46:26 sw_hotplug -- common/autotest_common.sh@976 -- # wait 69404 00:35:09.548 05:46:29 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:10.115 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:10.683 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:10.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:10.942 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:10.942 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:10.942 00:35:10.942 real 2m33.845s 00:35:10.942 user 1m54.367s 00:35:10.942 sys 0m19.454s 00:35:10.942 05:46:30 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:35:10.942 05:46:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:35:10.942 ************************************ 00:35:10.942 END TEST sw_hotplug 00:35:10.942 ************************************ 00:35:10.942 05:46:30 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:35:10.942 05:46:30 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:10.942 05:46:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:10.942 05:46:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:10.942 05:46:30 -- common/autotest_common.sh@10 -- # set +x 00:35:10.942 ************************************ 00:35:10.942 START TEST nvme_xnvme 00:35:10.942 ************************************ 00:35:10.942 05:46:30 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:35:11.201 * Looking for test storage... 00:35:11.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:35:11.201 05:46:30 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:11.201 05:46:30 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:35:11.201 05:46:30 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:11.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.201 --rc genhtml_branch_coverage=1 00:35:11.201 --rc genhtml_function_coverage=1 00:35:11.201 --rc genhtml_legend=1 00:35:11.201 --rc geninfo_all_blocks=1 00:35:11.201 --rc geninfo_unexecuted_blocks=1 00:35:11.201 00:35:11.201 ' 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:11.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.201 --rc genhtml_branch_coverage=1 00:35:11.201 --rc genhtml_function_coverage=1 00:35:11.201 --rc genhtml_legend=1 00:35:11.201 --rc geninfo_all_blocks=1 00:35:11.201 --rc geninfo_unexecuted_blocks=1 00:35:11.201 00:35:11.201 ' 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:11.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.201 --rc genhtml_branch_coverage=1 00:35:11.201 --rc genhtml_function_coverage=1 00:35:11.201 --rc genhtml_legend=1 00:35:11.201 --rc geninfo_all_blocks=1 00:35:11.201 --rc geninfo_unexecuted_blocks=1 00:35:11.201 00:35:11.201 ' 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:11.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:11.201 --rc genhtml_branch_coverage=1 00:35:11.201 --rc genhtml_function_coverage=1 00:35:11.201 --rc genhtml_legend=1 00:35:11.201 --rc geninfo_all_blocks=1 00:35:11.201 --rc geninfo_unexecuted_blocks=1 00:35:11.201 00:35:11.201 ' 00:35:11.201 05:46:31 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:11.201 05:46:31 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:11.201 05:46:31 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.201 05:46:31 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.201 05:46:31 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.201 05:46:31 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:35:11.201 05:46:31 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:11.201 05:46:31 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:11.201 05:46:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:11.201 ************************************ 00:35:11.201 START TEST xnvme_to_malloc_dd_copy 00:35:11.201 ************************************ 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:35:11.201 05:46:31 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:35:11.201 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:11.459 05:46:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:11.459 { 00:35:11.459 "subsystems": [ 00:35:11.459 { 00:35:11.459 "subsystem": "bdev", 00:35:11.459 "config": [ 00:35:11.459 { 00:35:11.459 "params": { 00:35:11.459 "block_size": 512, 00:35:11.459 "num_blocks": 2097152, 00:35:11.459 "name": "malloc0" 00:35:11.459 }, 00:35:11.459 "method": "bdev_malloc_create" 00:35:11.459 }, 00:35:11.459 { 00:35:11.459 "params": { 00:35:11.459 "io_mechanism": "libaio", 00:35:11.459 "filename": "/dev/nullb0", 00:35:11.459 "name": "null0" 00:35:11.459 }, 00:35:11.459 "method": "bdev_xnvme_create" 00:35:11.459 }, 00:35:11.459 { 00:35:11.459 "method": "bdev_wait_for_examine" 00:35:11.459 } 00:35:11.459 ] 00:35:11.459 } 00:35:11.459 ] 00:35:11.459 } 00:35:11.459 [2024-11-20 05:46:31.221784] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
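The JSON blob printed above is the complete spdk_dd configuration for the first copy pass: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) written into an xnvme bdev that drives /dev/nullb0 through libaio. A sketch of reproducing the pass outside the harness, assuming a built SPDK checkout (paths relative to the repo root):

    # standalone re-run of the first pass, using the config shown above
    modprobe null_blk gb=1
    ./build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'JSON'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":2097152,"name":"malloc0"},
       "method":"bdev_malloc_create"},
      {"params":{"io_mechanism":"libaio","filename":"/dev/nullb0","name":"null0"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    JSON
    )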
00:35:11.459 [2024-11-20 05:46:31.222051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:35:11.716 [2024-11-20 05:46:31.410240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.716 [2024-11-20 05:46:31.576826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.032  [2024-11-20T05:46:35.886Z] Copying: 212/1024 [MB] (212 MBps) [2024-11-20T05:46:36.824Z] Copying: 421/1024 [MB] (208 MBps) [2024-11-20T05:46:37.756Z] Copying: 630/1024 [MB] (209 MBps) [2024-11-20T05:46:38.692Z] Copying: 845/1024 [MB] (214 MBps) [2024-11-20T05:46:44.018Z] Copying: 1024/1024 [MB] (average 211 MBps) 00:35:24.099 00:35:24.099 05:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:35:24.099 05:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:35:24.099 05:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:24.099 05:46:43 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:24.099 { 00:35:24.099 "subsystems": [ 00:35:24.099 { 00:35:24.099 "subsystem": "bdev", 00:35:24.099 "config": [ 00:35:24.099 { 00:35:24.099 "params": { 00:35:24.099 "block_size": 512, 00:35:24.099 "num_blocks": 2097152, 00:35:24.099 "name": "malloc0" 00:35:24.099 }, 00:35:24.099 "method": "bdev_malloc_create" 00:35:24.099 }, 00:35:24.099 { 00:35:24.099 "params": { 00:35:24.099 "io_mechanism": "libaio", 00:35:24.099 "filename": "/dev/nullb0", 00:35:24.099 "name": "null0" 00:35:24.099 }, 00:35:24.099 "method": "bdev_xnvme_create" 00:35:24.099 }, 00:35:24.099 { 00:35:24.099 "method": "bdev_wait_for_examine" 00:35:24.099 } 00:35:24.099 ] 00:35:24.099 } 00:35:24.099 ] 00:35:24.099 } 00:35:24.099 [2024-11-20 05:46:43.852851] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
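Each pass moves the full 1024 MB, so the reported 211 MBps average works out to roughly 1024 / 211 ≈ 4.9 s of copy time per pass, consistent with the spread of the progress timestamps. The second invocation above swaps --ib and --ob so the same gigabyte travels back from null0 into malloc0 through an otherwise identical libaio configuration.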
00:35:24.099 [2024-11-20 05:46:43.853135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70916 ] 00:35:24.357 [2024-11-20 05:46:44.038317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.357 [2024-11-20 05:46:44.213156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.643  [2024-11-20T05:46:48.529Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-20T05:46:49.462Z] Copying: 422/1024 [MB] (209 MBps) [2024-11-20T05:46:50.396Z] Copying: 632/1024 [MB] (209 MBps) [2024-11-20T05:46:51.330Z] Copying: 841/1024 [MB] (208 MBps) [2024-11-20T05:46:56.598Z] Copying: 1024/1024 [MB] (average 211 MBps) 00:35:36.679 00:35:36.679 05:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:35:36.679 05:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:35:36.679 05:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:35:36.679 05:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:35:36.679 05:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:36.679 05:46:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:36.679 { 00:35:36.679 "subsystems": [ 00:35:36.679 { 00:35:36.679 "subsystem": "bdev", 00:35:36.679 "config": [ 00:35:36.679 { 00:35:36.679 "params": { 00:35:36.679 "block_size": 512, 00:35:36.679 "num_blocks": 2097152, 00:35:36.679 "name": "malloc0" 00:35:36.679 }, 00:35:36.679 "method": "bdev_malloc_create" 00:35:36.679 }, 00:35:36.679 { 00:35:36.679 "params": { 00:35:36.679 "io_mechanism": "io_uring", 00:35:36.679 "filename": "/dev/nullb0", 00:35:36.679 "name": "null0" 00:35:36.679 }, 00:35:36.679 "method": "bdev_xnvme_create" 00:35:36.679 }, 00:35:36.679 { 00:35:36.679 "method": "bdev_wait_for_examine" 00:35:36.679 } 00:35:36.679 ] 00:35:36.679 } 00:35:36.679 ] 00:35:36.679 } 00:35:36.679 [2024-11-20 05:46:56.473267] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
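Switching backends costs one assignment: the sh@38/sh@39 lines rerun the same loop body with io_mechanism rewritten from libaio to io_uring while the filename, the bdev name, and both spdk_dd directions stay unchanged. A condensed sketch of the pattern, with the names taken from the trace:

    # one config array, two backends; only io_mechanism changes per iteration
    declare -A method_bdev_xnvme_create_0=(
      [name]=null0 [filename]=/dev/nullb0 [io_mechanism]=libaio
    )
    for io in libaio io_uring; do
      method_bdev_xnvme_create_0[io_mechanism]=$io   # xnvme.sh@39
      # regenerate the JSON config and run spdk_dd in both directions
    done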
00:35:36.679 [2024-11-20 05:46:56.473570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71053 ] 00:35:36.938 [2024-11-20 05:46:56.658732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.938 [2024-11-20 05:46:56.821890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.227  [2024-11-20T05:47:01.081Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-20T05:47:02.018Z] Copying: 435/1024 [MB] (219 MBps) [2024-11-20T05:47:02.951Z] Copying: 655/1024 [MB] (220 MBps) [2024-11-20T05:47:03.518Z] Copying: 875/1024 [MB] (219 MBps) [2024-11-20T05:47:08.784Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:35:48.865 00:35:48.865 05:47:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:35:48.865 05:47:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:35:48.865 05:47:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:35:48.865 05:47:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:48.865 { 00:35:48.865 "subsystems": [ 00:35:48.865 { 00:35:48.865 "subsystem": "bdev", 00:35:48.865 "config": [ 00:35:48.865 { 00:35:48.865 "params": { 00:35:48.865 "block_size": 512, 00:35:48.865 "num_blocks": 2097152, 00:35:48.865 "name": "malloc0" 00:35:48.865 }, 00:35:48.865 "method": "bdev_malloc_create" 00:35:48.865 }, 00:35:48.865 { 00:35:48.865 "params": { 00:35:48.865 "io_mechanism": "io_uring", 00:35:48.865 "filename": "/dev/nullb0", 00:35:48.865 "name": "null0" 00:35:48.865 }, 00:35:48.865 "method": "bdev_xnvme_create" 00:35:48.865 }, 00:35:48.865 { 00:35:48.865 "method": "bdev_wait_for_examine" 00:35:48.865 } 00:35:48.865 ] 00:35:48.865 } 00:35:48.865 ] 00:35:48.865 } 00:35:48.865 [2024-11-20 05:47:07.872835] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:35:48.865 [2024-11-20 05:47:07.872995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71179 ] 00:35:48.865 [2024-11-20 05:47:08.054673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.865 [2024-11-20 05:47:08.197396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.401  [2024-11-20T05:47:12.256Z] Copying: 270/1024 [MB] (270 MBps) [2024-11-20T05:47:13.193Z] Copying: 541/1024 [MB] (271 MBps) [2024-11-20T05:47:14.130Z] Copying: 778/1024 [MB] (236 MBps) [2024-11-20T05:47:14.130Z] Copying: 1006/1024 [MB] (228 MBps) [2024-11-20T05:47:19.397Z] Copying: 1024/1024 [MB] (average 251 MBps) 00:35:59.478 00:35:59.478 05:47:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:35:59.478 05:47:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:35:59.478 00:35:59.478 real 0m47.596s 00:35:59.478 user 0m41.840s 00:35:59.478 sys 0m5.190s 00:35:59.478 05:47:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:59.478 05:47:18 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:35:59.478 ************************************ 00:35:59.478 END TEST xnvme_to_malloc_dd_copy 00:35:59.478 ************************************ 00:35:59.478 05:47:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:35:59.478 05:47:18 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:59.478 05:47:18 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:59.478 05:47:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:59.478 ************************************ 00:35:59.478 START TEST xnvme_bdevperf 00:35:59.478 ************************************ 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:35:59.478 05:47:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:59.478 { 00:35:59.478 "subsystems": [ 00:35:59.478 { 00:35:59.478 "subsystem": "bdev", 00:35:59.478 "config": [ 00:35:59.478 { 00:35:59.478 "params": { 00:35:59.478 "io_mechanism": "libaio", 00:35:59.478 "filename": "/dev/nullb0", 00:35:59.478 "name": "null0" 00:35:59.478 }, 00:35:59.478 "method": "bdev_xnvme_create" 00:35:59.478 }, 00:35:59.478 { 00:35:59.478 "method": "bdev_wait_for_examine" 00:35:59.478 } 00:35:59.478 ] 00:35:59.478 } 00:35:59.478 ] 00:35:59.478 } 00:35:59.478 [2024-11-20 05:47:18.881984] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:35:59.478 [2024-11-20 05:47:18.882255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:35:59.478 [2024-11-20 05:47:19.066229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.478 [2024-11-20 05:47:19.208673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.737 Running I/O for 5 seconds... 
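# Standalone version of the libaio bdevperf pass traced above (sketch; assumes the
# null_blk device created by init_null_blk and the repo paths used in this run):
modprobe null_blk gb=1
cat > /tmp/bdevperf_libaio.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_xnvme_create",
    "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# 64-deep random reads of 4 KiB for 5 seconds against the null0 xNVMe bdev:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_libaio.json \
    -q 64 -w randread -t 5 -T null0 -o 4096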
00:36:01.679 175296.00 IOPS, 684.75 MiB/s [2024-11-20T05:47:22.976Z] 175008.00 IOPS, 683.62 MiB/s [2024-11-20T05:47:23.912Z] 172864.00 IOPS, 675.25 MiB/s [2024-11-20T05:47:24.879Z] 173232.00 IOPS, 676.69 MiB/s 00:36:04.960 Latency(us) 00:36:04.960 [2024-11-20T05:47:24.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.960 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:04.960 null0 : 5.00 173671.23 678.40 0.00 0.00 366.06 126.99 2160.68 00:36:04.960 [2024-11-20T05:47:24.879Z] =================================================================================================================== 00:36:04.960 [2024-11-20T05:47:24.879Z] Total : 173671.23 678.40 0.00 0.00 366.06 126.99 2160.68 00:36:06.356 05:47:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:36:06.356 05:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:36:06.356 05:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:36:06.356 05:47:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:36:06.356 05:47:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:36:06.356 05:47:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:06.356 { 00:36:06.356 "subsystems": [ 00:36:06.356 { 00:36:06.356 "subsystem": "bdev", 00:36:06.356 "config": [ 00:36:06.356 { 00:36:06.356 "params": { 00:36:06.356 "io_mechanism": "io_uring", 00:36:06.356 "filename": "/dev/nullb0", 00:36:06.356 "name": "null0" 00:36:06.356 }, 00:36:06.356 "method": "bdev_xnvme_create" 00:36:06.356 }, 00:36:06.356 { 00:36:06.356 "method": "bdev_wait_for_examine" 00:36:06.356 } 00:36:06.356 ] 00:36:06.356 } 00:36:06.356 ] 00:36:06.356 } 00:36:06.356 [2024-11-20 05:47:26.113431] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:06.356 [2024-11-20 05:47:26.113710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71407 ] 00:36:06.615 [2024-11-20 05:47:26.299612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.615 [2024-11-20 05:47:26.458390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.181 Running I/O for 5 seconds... 
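# The io_uring pass is the same invocation with only io_mechanism changed
# (sketch, reusing the config file from the libaio example above):
sed 's/"libaio"/"io_uring"/' /tmp/bdevperf_libaio.json > /tmp/bdevperf_io_uring.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_io_uring.json \
    -q 64 -w randread -t 5 -T null0 -o 4096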
00:36:09.088 179840.00 IOPS, 702.50 MiB/s [2024-11-20T05:47:29.941Z] 180256.00 IOPS, 704.12 MiB/s [2024-11-20T05:47:31.316Z] 180714.67 IOPS, 705.92 MiB/s [2024-11-20T05:47:32.252Z] 180848.00 IOPS, 706.44 MiB/s 00:36:12.333 Latency(us) 00:36:12.333 [2024-11-20T05:47:32.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.333 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:36:12.333 null0 : 5.00 180607.01 705.50 0.00 0.00 351.52 173.50 2031.90 00:36:12.333 [2024-11-20T05:47:32.252Z] =================================================================================================================== 00:36:12.333 [2024-11-20T05:47:32.252Z] Total : 180607.01 705.50 0.00 0.00 351.52 173.50 2031.90 00:36:13.709 05:47:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:36:13.709 05:47:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:36:13.709 00:36:13.709 real 0m14.632s 00:36:13.709 user 0m11.715s 00:36:13.709 sys 0m2.703s 00:36:13.709 05:47:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:13.709 ************************************ 00:36:13.709 END TEST xnvme_bdevperf 00:36:13.709 ************************************ 00:36:13.709 05:47:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:13.709 ************************************ 00:36:13.709 END TEST nvme_xnvme 00:36:13.709 ************************************ 00:36:13.709 00:36:13.709 real 1m2.593s 00:36:13.709 user 0m53.724s 00:36:13.709 sys 0m8.105s 00:36:13.709 05:47:33 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:13.709 05:47:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:13.709 05:47:33 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:36:13.709 05:47:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:13.709 05:47:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:13.709 05:47:33 -- common/autotest_common.sh@10 -- # set +x 00:36:13.709 ************************************ 00:36:13.709 START TEST blockdev_xnvme 00:36:13.709 ************************************ 00:36:13.709 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:36:13.709 * Looking for test storage... 
00:36:13.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:36:13.709 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:13.709 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:36:13.709 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:13.967 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:36:13.967 05:47:33 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:13.968 05:47:33 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.968 --rc genhtml_branch_coverage=1 00:36:13.968 --rc genhtml_function_coverage=1 00:36:13.968 --rc genhtml_legend=1 00:36:13.968 --rc geninfo_all_blocks=1 00:36:13.968 --rc geninfo_unexecuted_blocks=1 00:36:13.968 00:36:13.968 ' 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.968 --rc genhtml_branch_coverage=1 00:36:13.968 --rc genhtml_function_coverage=1 00:36:13.968 --rc genhtml_legend=1 
00:36:13.968 --rc geninfo_all_blocks=1 00:36:13.968 --rc geninfo_unexecuted_blocks=1 00:36:13.968 00:36:13.968 ' 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.968 --rc genhtml_branch_coverage=1 00:36:13.968 --rc genhtml_function_coverage=1 00:36:13.968 --rc genhtml_legend=1 00:36:13.968 --rc geninfo_all_blocks=1 00:36:13.968 --rc geninfo_unexecuted_blocks=1 00:36:13.968 00:36:13.968 ' 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:13.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:13.968 --rc genhtml_branch_coverage=1 00:36:13.968 --rc genhtml_function_coverage=1 00:36:13.968 --rc genhtml_legend=1 00:36:13.968 --rc geninfo_all_blocks=1 00:36:13.968 --rc geninfo_unexecuted_blocks=1 00:36:13.968 00:36:13.968 ' 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71560 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:36:13.968 05:47:33 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71560 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 71560 ']' 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:13.968 05:47:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:13.968 [2024-11-20 05:47:33.856552] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:13.968 [2024-11-20 05:47:33.856862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71560 ] 00:36:14.226 [2024-11-20 05:47:34.041776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.484 [2024-11-20 05:47:34.199093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.861 05:47:35 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:15.861 05:47:35 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:36:15.861 05:47:35 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:36:15.861 05:47:35 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:36:15.861 05:47:35 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:36:15.861 05:47:35 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:36:15.861 05:47:35 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:16.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:16.382 Waiting for block devices as requested 00:36:16.382 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:16.641 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:16.641 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:16.641 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:21.915 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:21.915 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:36:21.915 
05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:36:21.915 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:36:21.916 nvme0n1 00:36:21.916 nvme1n1 00:36:21.916 nvme2n1 00:36:21.916 nvme2n2 00:36:21.916 nvme2n3 00:36:21.916 nvme3n1 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:21.916 05:47:41 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:36:21.916 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:36:21.917 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "94bfe668-a88a-4920-b01f-f4f9c671c981"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "94bfe668-a88a-4920-b01f-f4f9c671c981",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "bd34767a-4fe4-42b8-aac5-95c0695c8a02"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bd34767a-4fe4-42b8-aac5-95c0695c8a02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ea6856d9-e3fd-44be-8049-0be362769960"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ea6856d9-e3fd-44be-8049-0be362769960",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4bdcb917-cf94-43e2-889f-3eeecc65dcaf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bdcb917-cf94-43e2-889f-3eeecc65dcaf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "46e407db-8d26-49f2-905f-c64c4588db7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "46e407db-8d26-49f2-905f-c64c4588db7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8fb48701-9fad-4101-b645-6c8ae7d30eb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8fb48701-9fad-4101-b645-6c8ae7d30eb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:36:21.917 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:36:21.917 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:36:21.917 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:36:21.917 05:47:41 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71560 00:36:21.917 05:47:41 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 71560 ']' 00:36:21.917 05:47:41 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 71560 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71560 00:36:22.175 killing process with pid 71560 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71560' 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 71560 00:36:22.175 05:47:41 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 71560 00:36:25.460 05:47:44 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:25.460 05:47:44 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:36:25.460 05:47:44 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:36:25.460 05:47:44 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:25.460 05:47:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:25.460 ************************************ 00:36:25.460 START TEST bdev_hello_world 00:36:25.460 ************************************ 00:36:25.460 05:47:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:36:25.460 [2024-11-20 05:47:45.068333] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:25.460 [2024-11-20 05:47:45.068498] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71946 ] 00:36:25.460 [2024-11-20 05:47:45.254048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.719 [2024-11-20 05:47:45.416475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.286 [2024-11-20 05:47:45.969001] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:36:26.286 [2024-11-20 05:47:45.969064] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:36:26.286 [2024-11-20 05:47:45.969098] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:36:26.286 [2024-11-20 05:47:45.971493] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:36:26.286 [2024-11-20 05:47:45.971935] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:36:26.286 [2024-11-20 05:47:45.971964] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:36:26.286 [2024-11-20 05:47:45.972181] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
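# The hello_bdev example exercised above runs standalone against any bdev from the
# generated config; nvme0n1 is the first xNVMe bdev created earlier (sketch, paths
# as in this run):
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1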
00:36:26.286 00:36:26.286 [2024-11-20 05:47:45.972207] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:36:27.664 00:36:27.664 real 0m2.448s 00:36:27.664 user 0m1.981s 00:36:27.664 sys 0m0.347s 00:36:27.664 ************************************ 00:36:27.664 END TEST bdev_hello_world 00:36:27.664 ************************************ 00:36:27.664 05:47:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:27.664 05:47:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:36:27.664 05:47:47 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:36:27.664 05:47:47 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:27.664 05:47:47 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:27.664 05:47:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:27.664 ************************************ 00:36:27.664 START TEST bdev_bounds 00:36:27.664 ************************************ 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71989 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:36:27.664 Process bdevio pid: 71989 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71989' 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71989 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71989 ']' 00:36:27.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:27.664 05:47:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:36:27.664 [2024-11-20 05:47:47.570994] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
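# bdevio invocation behind the bounds suites that follow (sketch, mirroring the
# trace: -w makes bdevio wait for an RPC trigger, so it runs in the background):
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# ...then, once the RPC server is up, drive all suites through the helper:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests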
00:36:27.664 [2024-11-20 05:47:47.571276] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71989 ] 00:36:27.923 [2024-11-20 05:47:47.755428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:28.182 [2024-11-20 05:47:47.922958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.182 [2024-11-20 05:47:47.923138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.182 [2024-11-20 05:47:47.923171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:28.749 05:47:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:28.749 05:47:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:36:28.749 05:47:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:36:28.749 I/O targets: 00:36:28.749 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:36:28.749 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:36:28.749 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:36:28.749 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:36:28.749 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:36:28.749 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:36:28.749 00:36:28.749 00:36:28.749 CUnit - A unit testing framework for C - Version 2.1-3 00:36:28.749 http://cunit.sourceforge.net/ 00:36:28.749 00:36:28.749 00:36:28.749 Suite: bdevio tests on: nvme3n1 00:36:28.749 Test: blockdev write read block ...passed 00:36:28.749 Test: blockdev write zeroes read block ...passed 00:36:28.749 Test: blockdev write zeroes read no split ...passed 00:36:29.006 Test: blockdev write zeroes read split ...passed 00:36:29.006 Test: blockdev write zeroes read split partial ...passed 00:36:29.006 Test: blockdev reset ...passed 00:36:29.006 Test: blockdev write read 8 blocks ...passed 00:36:29.006 Test: blockdev write read size > 128k ...passed 00:36:29.006 Test: blockdev write read invalid size ...passed 00:36:29.006 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:29.006 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:29.006 Test: blockdev write read max offset ...passed 00:36:29.006 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:29.006 Test: blockdev writev readv 8 blocks ...passed 00:36:29.006 Test: blockdev writev readv 30 x 1block ...passed 00:36:29.006 Test: blockdev writev readv block ...passed 00:36:29.006 Test: blockdev writev readv size > 128k ...passed 00:36:29.006 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:29.006 Test: blockdev comparev and writev ...passed 00:36:29.006 Test: blockdev nvme passthru rw ...passed 00:36:29.006 Test: blockdev nvme passthru vendor specific ...passed 00:36:29.006 Test: blockdev nvme admin passthru ...passed 00:36:29.006 Test: blockdev copy ...passed 00:36:29.006 Suite: bdevio tests on: nvme2n3 00:36:29.006 Test: blockdev write read block ...passed 00:36:29.006 Test: blockdev write zeroes read block ...passed 00:36:29.006 Test: blockdev write zeroes read no split ...passed 00:36:29.006 Test: blockdev write zeroes read split ...passed 00:36:29.006 Test: blockdev write zeroes read split partial ...passed 00:36:29.006 Test: blockdev reset ...passed 
00:36:29.006 Test: blockdev write read 8 blocks ...passed 00:36:29.006 Test: blockdev write read size > 128k ...passed 00:36:29.006 Test: blockdev write read invalid size ...passed 00:36:29.006 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:29.006 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:29.006 Test: blockdev write read max offset ...passed 00:36:29.006 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:29.006 Test: blockdev writev readv 8 blocks ...passed 00:36:29.006 Test: blockdev writev readv 30 x 1block ...passed 00:36:29.006 Test: blockdev writev readv block ...passed 00:36:29.006 Test: blockdev writev readv size > 128k ...passed 00:36:29.006 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:29.006 Test: blockdev comparev and writev ...passed 00:36:29.006 Test: blockdev nvme passthru rw ...passed 00:36:29.006 Test: blockdev nvme passthru vendor specific ...passed 00:36:29.006 Test: blockdev nvme admin passthru ...passed 00:36:29.006 Test: blockdev copy ...passed 00:36:29.006 Suite: bdevio tests on: nvme2n2 00:36:29.006 Test: blockdev write read block ...passed 00:36:29.006 Test: blockdev write zeroes read block ...passed 00:36:29.006 Test: blockdev write zeroes read no split ...passed 00:36:29.006 Test: blockdev write zeroes read split ...passed 00:36:29.265 Test: blockdev write zeroes read split partial ...passed 00:36:29.266 Test: blockdev reset ...passed 00:36:29.266 Test: blockdev write read 8 blocks ...passed 00:36:29.266 Test: blockdev write read size > 128k ...passed 00:36:29.266 Test: blockdev write read invalid size ...passed 00:36:29.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:29.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:29.266 Test: blockdev write read max offset ...passed 00:36:29.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:29.266 Test: blockdev writev readv 8 blocks ...passed 00:36:29.266 Test: blockdev writev readv 30 x 1block ...passed 00:36:29.266 Test: blockdev writev readv block ...passed 00:36:29.266 Test: blockdev writev readv size > 128k ...passed 00:36:29.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:29.266 Test: blockdev comparev and writev ...passed 00:36:29.266 Test: blockdev nvme passthru rw ...passed 00:36:29.266 Test: blockdev nvme passthru vendor specific ...passed 00:36:29.266 Test: blockdev nvme admin passthru ...passed 00:36:29.266 Test: blockdev copy ...passed 00:36:29.266 Suite: bdevio tests on: nvme2n1 00:36:29.266 Test: blockdev write read block ...passed 00:36:29.266 Test: blockdev write zeroes read block ...passed 00:36:29.266 Test: blockdev write zeroes read no split ...passed 00:36:29.266 Test: blockdev write zeroes read split ...passed 00:36:29.266 Test: blockdev write zeroes read split partial ...passed 00:36:29.266 Test: blockdev reset ...passed 00:36:29.266 Test: blockdev write read 8 blocks ...passed 00:36:29.266 Test: blockdev write read size > 128k ...passed 00:36:29.266 Test: blockdev write read invalid size ...passed 00:36:29.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:29.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:29.266 Test: blockdev write read max offset ...passed 00:36:29.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:29.266 Test: blockdev writev readv 8 blocks 
...passed 00:36:29.266 Test: blockdev writev readv 30 x 1block ...passed 00:36:29.266 Test: blockdev writev readv block ...passed 00:36:29.266 Test: blockdev writev readv size > 128k ...passed 00:36:29.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:29.266 Test: blockdev comparev and writev ...passed 00:36:29.266 Test: blockdev nvme passthru rw ...passed 00:36:29.266 Test: blockdev nvme passthru vendor specific ...passed 00:36:29.266 Test: blockdev nvme admin passthru ...passed 00:36:29.266 Test: blockdev copy ...passed 00:36:29.266 Suite: bdevio tests on: nvme1n1 00:36:29.266 Test: blockdev write read block ...passed 00:36:29.266 Test: blockdev write zeroes read block ...passed 00:36:29.266 Test: blockdev write zeroes read no split ...passed 00:36:29.266 Test: blockdev write zeroes read split ...passed 00:36:29.266 Test: blockdev write zeroes read split partial ...passed 00:36:29.266 Test: blockdev reset ...passed 00:36:29.266 Test: blockdev write read 8 blocks ...passed 00:36:29.266 Test: blockdev write read size > 128k ...passed 00:36:29.266 Test: blockdev write read invalid size ...passed 00:36:29.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:29.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:29.266 Test: blockdev write read max offset ...passed 00:36:29.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:29.266 Test: blockdev writev readv 8 blocks ...passed 00:36:29.266 Test: blockdev writev readv 30 x 1block ...passed 00:36:29.266 Test: blockdev writev readv block ...passed 00:36:29.266 Test: blockdev writev readv size > 128k ...passed 00:36:29.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:29.266 Test: blockdev comparev and writev ...passed 00:36:29.266 Test: blockdev nvme passthru rw ...passed 00:36:29.266 Test: blockdev nvme passthru vendor specific ...passed 00:36:29.266 Test: blockdev nvme admin passthru ...passed 00:36:29.266 Test: blockdev copy ...passed 00:36:29.266 Suite: bdevio tests on: nvme0n1 00:36:29.266 Test: blockdev write read block ...passed 00:36:29.266 Test: blockdev write zeroes read block ...passed 00:36:29.266 Test: blockdev write zeroes read no split ...passed 00:36:29.525 Test: blockdev write zeroes read split ...passed 00:36:29.525 Test: blockdev write zeroes read split partial ...passed 00:36:29.525 Test: blockdev reset ...passed 00:36:29.525 Test: blockdev write read 8 blocks ...passed 00:36:29.525 Test: blockdev write read size > 128k ...passed 00:36:29.525 Test: blockdev write read invalid size ...passed 00:36:29.525 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:29.525 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:29.525 Test: blockdev write read max offset ...passed 00:36:29.525 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:29.525 Test: blockdev writev readv 8 blocks ...passed 00:36:29.525 Test: blockdev writev readv 30 x 1block ...passed 00:36:29.525 Test: blockdev writev readv block ...passed 00:36:29.525 Test: blockdev writev readv size > 128k ...passed 00:36:29.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:29.525 Test: blockdev comparev and writev ...passed 00:36:29.525 Test: blockdev nvme passthru rw ...passed 00:36:29.525 Test: blockdev nvme passthru vendor specific ...passed 00:36:29.525 Test: blockdev nvme admin passthru ...passed 00:36:29.525 Test: blockdev copy ...passed 
00:36:29.525 00:36:29.525 Run Summary: Type Total Ran Passed Failed Inactive 00:36:29.525 suites 6 6 n/a 0 0 00:36:29.525 tests 138 138 138 0 0 00:36:29.525 asserts 780 780 780 0 n/a 00:36:29.525 00:36:29.525 Elapsed time = 1.711 seconds 00:36:29.525 0 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71989 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71989 ']' 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71989 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71989 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71989' 00:36:29.525 killing process with pid 71989 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71989 00:36:29.525 05:47:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71989 00:36:30.903 05:47:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:36:30.903 00:36:30.903 real 0m3.349s 00:36:30.903 user 0m8.342s 00:36:30.903 sys 0m0.544s 00:36:30.903 05:47:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:30.903 05:47:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:36:30.903 ************************************ 00:36:30.903 END TEST bdev_bounds 00:36:30.903 ************************************ 00:36:31.161 05:47:50 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:36:31.161 05:47:50 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:36:31.161 05:47:50 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:31.161 05:47:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:31.161 ************************************ 00:36:31.161 START TEST bdev_nbd 00:36:31.161 ************************************ 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72066 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72066 /var/tmp/spdk-nbd.sock 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 72066 ']' 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:36:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:31.162 05:47:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:36:31.162 [2024-11-20 05:47:51.008218] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
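# NBD attach used by the nbd suite below: bdev_svc listens on /var/tmp/spdk-nbd.sock
# and each bdev is exported through the kernel nbd driver (sketch, mirrors the trace;
# the RPC picks a free /dev/nbdN when none is given):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
    nbd_start_disk nvme0n1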
00:36:31.162 [2024-11-20 05:47:51.008491] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.421 [2024-11-20 05:47:51.193389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.679 [2024-11-20 05:47:51.360046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:32.245 05:47:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:36:32.503 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:36:32.503 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:36:32.503 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:32.504 
1+0 records in 00:36:32.504 1+0 records out 00:36:32.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000786783 s, 5.2 MB/s 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:32.504 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:32.763 1+0 records in 00:36:32.763 1+0 records out 00:36:32.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544179 s, 7.5 MB/s 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:32.763 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:36:33.022 05:47:52 
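Each nbd_start_disk above is chased by waitfornbd, which is what the grep/dd/stat churn in this stretch of the log is doing: wait for the kernel to publish the device node, then prove one block is readable through it. A condensed sketch; the scratch path is shortened and the retry loop around the read is trimmed relative to the real helper:

    waitfornbd() {
        local nbd_name=$1 i
        # wait for the device to show up in the kernel's partition list
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4 KiB block with O_DIRECT and confirm it landed
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
            || return 1
        [ "$(stat -c %s /tmp/nbdtest)" != 0 ] || return 1
        rm -f /tmp/nbdtest
    }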
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:33.022 1+0 records in 00:36:33.022 1+0 records out 00:36:33.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723181 s, 5.7 MB/s 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:33.022 05:47:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:33.279 1+0 records in 00:36:33.279 1+0 records out 00:36:33.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813635 s, 5.0 MB/s 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:33.279 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:36:33.536 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:33.537 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:33.796 1+0 records in 00:36:33.796 1+0 records out 00:36:33.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712285 s, 5.8 MB/s 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:33.796 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:36:34.055 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:36:34.055 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:36:34.055 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:36:34.055 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:36:34.055 05:47:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:34.056 1+0 records in 00:36:34.056 1+0 records out 00:36:34.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827429 s, 5.0 MB/s 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:36:34.056 05:47:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd0", 00:36:34.315 "bdev_name": "nvme0n1" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd1", 00:36:34.315 "bdev_name": "nvme1n1" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd2", 00:36:34.315 "bdev_name": "nvme2n1" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd3", 00:36:34.315 "bdev_name": "nvme2n2" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd4", 00:36:34.315 "bdev_name": "nvme2n3" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd5", 00:36:34.315 "bdev_name": "nvme3n1" 00:36:34.315 } 00:36:34.315 ]' 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd0", 00:36:34.315 "bdev_name": "nvme0n1" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd1", 00:36:34.315 "bdev_name": "nvme1n1" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd2", 00:36:34.315 "bdev_name": "nvme2n1" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd3", 00:36:34.315 "bdev_name": "nvme2n2" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd4", 00:36:34.315 "bdev_name": "nvme2n3" 00:36:34.315 }, 00:36:34.315 { 00:36:34.315 "nbd_device": "/dev/nbd5", 00:36:34.315 "bdev_name": "nvme3n1" 00:36:34.315 } 00:36:34.315 ]' 00:36:34.315 05:47:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:34.315 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:34.575 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:34.834 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:35.093 05:47:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:35.352 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:36:35.611 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:36:35.611 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:36:35.611 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:36:35.611 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:35.612 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:35.612 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:36:35.612 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:35.612 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:35.612 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:35.612 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:35.871 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:36.130 05:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:36:36.389 /dev/nbd0 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:36.389 1+0 records in 00:36:36.389 1+0 records out 00:36:36.389 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796608 s, 5.1 MB/s 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:36.389 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:36:36.647 /dev/nbd1 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:36.647 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:36.906 1+0 records in 00:36:36.906 1+0 records out 00:36:36.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663561 s, 6.2 MB/s 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:36.906 05:47:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:36.906 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:36:36.906 /dev/nbd10 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:37.166 1+0 records in 00:36:37.166 1+0 records out 00:36:37.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767689 s, 5.3 MB/s 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:37.166 05:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:36:37.426 /dev/nbd11 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:37.426 05:47:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:37.426 1+0 records in 00:36:37.426 1+0 records out 00:36:37.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689104 s, 5.9 MB/s 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:37.426 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:36:37.685 /dev/nbd12 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:37.685 1+0 records in 00:36:37.685 1+0 records out 00:36:37.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790388 s, 5.2 MB/s 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:37.685 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:36:37.944 /dev/nbd13 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:37.944 1+0 records in 00:36:37.944 1+0 records out 00:36:37.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866366 s, 4.7 MB/s 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:37.944 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:38.203 05:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd0", 00:36:38.203 "bdev_name": "nvme0n1" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd1", 00:36:38.203 "bdev_name": "nvme1n1" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd10", 00:36:38.203 "bdev_name": "nvme2n1" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd11", 00:36:38.203 "bdev_name": "nvme2n2" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd12", 00:36:38.203 "bdev_name": "nvme2n3" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd13", 00:36:38.203 "bdev_name": "nvme3n1" 00:36:38.203 } 00:36:38.203 ]' 00:36:38.203 05:47:58 
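The JSON captured just above is parsed next to confirm all six exports are live. The check reduces to roughly this (socket path as in this run):

    sock=/var/tmp/spdk-nbd.sock
    # list active NBD exports over RPC and count the device nodes
    names=$(./scripts/rpc.py -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd)
    if [ "$count" -ne 6 ]; then
        echo "expected 6 nbd devices, got $count" >&2
        exit 1
    fi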
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd0", 00:36:38.203 "bdev_name": "nvme0n1" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd1", 00:36:38.203 "bdev_name": "nvme1n1" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd10", 00:36:38.203 "bdev_name": "nvme2n1" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd11", 00:36:38.203 "bdev_name": "nvme2n2" 00:36:38.203 }, 00:36:38.203 { 00:36:38.203 "nbd_device": "/dev/nbd12", 00:36:38.203 "bdev_name": "nvme2n3" 00:36:38.203 }, 00:36:38.203 { 00:36:38.204 "nbd_device": "/dev/nbd13", 00:36:38.204 "bdev_name": "nvme3n1" 00:36:38.204 } 00:36:38.204 ]' 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:36:38.204 /dev/nbd1 00:36:38.204 /dev/nbd10 00:36:38.204 /dev/nbd11 00:36:38.204 /dev/nbd12 00:36:38.204 /dev/nbd13' 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:36:38.204 /dev/nbd1 00:36:38.204 /dev/nbd10 00:36:38.204 /dev/nbd11 00:36:38.204 /dev/nbd12 00:36:38.204 /dev/nbd13' 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:36:38.204 256+0 records in 00:36:38.204 256+0 records out 00:36:38.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142606 s, 73.5 MB/s 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:38.204 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:36:38.500 256+0 records in 00:36:38.500 256+0 records out 00:36:38.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0981445 s, 10.7 MB/s 00:36:38.500 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:38.500 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:36:38.500 256+0 records in 00:36:38.500 256+0 records out 00:36:38.500 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.120757 s, 8.7 MB/s 00:36:38.500 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:38.500 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:36:38.500 256+0 records in 00:36:38.500 256+0 records out 00:36:38.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0982924 s, 10.7 MB/s 00:36:38.500 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:38.500 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:36:38.759 256+0 records in 00:36:38.759 256+0 records out 00:36:38.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102019 s, 10.3 MB/s 00:36:38.759 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:38.759 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:36:38.759 256+0 records in 00:36:38.759 256+0 records out 00:36:38.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104379 s, 10.0 MB/s 00:36:38.759 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:36:38.759 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:36:39.018 256+0 records in 00:36:39.018 256+0 records out 00:36:39.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10426 s, 10.1 MB/s 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:39.018 05:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:39.277 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:39.535 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:39.793 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:40.052 05:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:40.311 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:40.569 05:48:00 
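The write/compare pass that finished above, stripped of the harness plumbing, is: fill a 1 MiB scratch file from /dev/urandom, write it through every NBD device with O_DIRECT, then compare each device byte-for-byte against the file. In outline (scratch path shortened):

    tmp=./nbdrandtest
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
    for nbd in "${nbds[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbds[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"    # -b reports any differing bytes
    done
    rm "$tmp"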
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:40.569 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:40.827 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:40.828 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:36:40.828 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:36:41.087 malloc_lvol_verify 00:36:41.087 05:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:36:41.347 23771451-fcb1-4c5e-ad75-1625e2578f85 00:36:41.347 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:36:41.636 97b31e0c-3326-4050-a9f3-f24d48ec3e4a 00:36:41.636 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:36:41.894 /dev/nbd0 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:36:41.894 mke2fs 1.47.0 (5-Feb-2023) 00:36:41.894 Discarding device blocks: 0/4096 done 00:36:41.894 Creating filesystem with 4096 1k blocks and 1024 inodes 00:36:41.894 00:36:41.894 Allocating group tables: 0/1 done 00:36:41.894 Writing inode tables: 0/1 done 00:36:41.894 Creating journal (1024 blocks): done 00:36:41.894 Writing superblocks and filesystem accounting information: 0/1 done 00:36:41.894 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:41.894 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72066 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 72066 ']' 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 72066 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:42.154 05:48:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72066 00:36:42.154 killing process with pid 72066 00:36:42.154 05:48:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:42.154 05:48:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:42.154 05:48:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72066' 00:36:42.154 05:48:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 72066 00:36:42.154 05:48:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 72066 00:36:44.059 05:48:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:36:44.059 00:36:44.059 real 0m12.667s 00:36:44.059 user 0m17.011s 00:36:44.059 sys 0m4.903s 00:36:44.059 05:48:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:44.059 ************************************ 00:36:44.059 05:48:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
00:36:44.059 END TEST bdev_nbd 00:36:44.059 ************************************ 00:36:44.059 05:48:03 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:36:44.059 05:48:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:36:44.059 05:48:03 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:36:44.059 05:48:03 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:36:44.059 05:48:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:44.059 05:48:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:44.059 05:48:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:44.059 ************************************ 00:36:44.059 START TEST bdev_fio 00:36:44.059 ************************************ 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:36:44.059 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # 
echo serialize_overlap=1 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:44.059 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:44.060 ************************************ 00:36:44.060 START TEST bdev_fio_rw_verify 00:36:44.060 ************************************ 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:44.060 05:48:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:36:44.319 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:44.319 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:44.319 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:44.319 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:44.319 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:44.319 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:36:44.319 fio-3.35 00:36:44.319 Starting 6 threads 00:36:56.524 00:36:56.524 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72488: Wed Nov 20 05:48:15 2024 00:36:56.524 read: IOPS=31.3k, BW=122MiB/s (128MB/s)(1225MiB/10001msec) 00:36:56.524 slat (usec): min=2, max=2036, avg=10.43, stdev= 8.90 00:36:56.524 clat (usec): min=93, max=5239, avg=474.05, 
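[annotation] The preamble traced above reduces to a single invocation: the test resolves the ASan runtime that the fio plugin links against, preloads it ahead of the plugin, and hands fio the generated job file plus the bdev JSON config. A condensed sketch assembled strictly from the commands recorded above (the plugin/asan_lib variable names are illustrative; every path and flag is as logged):

  # resolve the ASan runtime the spdk_bdev fio plugin was linked against
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 on this host
  # preload ASan first, then the plugin, and run the generated job file
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output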
stdev=243.61 00:36:56.524 lat (usec): min=107, max=5248, avg=484.48, stdev=245.47 00:36:56.524 clat percentiles (usec): 00:36:56.524 | 50.000th=[ 429], 99.000th=[ 1172], 99.900th=[ 1893], 99.990th=[ 3818], 00:36:56.524 | 99.999th=[ 4113] 00:36:56.524 write: IOPS=31.7k, BW=124MiB/s (130MB/s)(1240MiB/10001msec); 0 zone resets 00:36:56.524 slat (usec): min=10, max=1629, avg=41.11, stdev=46.78 00:36:56.524 clat (usec): min=78, max=3727, avg=656.25, stdev=300.25 00:36:56.524 lat (usec): min=99, max=3803, avg=697.37, stdev=310.33 00:36:56.524 clat percentiles (usec): 00:36:56.524 | 50.000th=[ 619], 99.000th=[ 1516], 99.900th=[ 1958], 99.990th=[ 2409], 00:36:56.524 | 99.999th=[ 3458] 00:36:56.524 bw ( KiB/s): min=100391, max=153144, per=99.70%, avg=126573.84, stdev=2577.76, samples=114 00:36:56.524 iops : min=25097, max=38286, avg=31643.05, stdev=644.43, samples=114 00:36:56.524 lat (usec) : 100=0.01%, 250=11.08%, 500=36.73%, 750=28.70%, 1000=15.56% 00:36:56.524 lat (msec) : 2=7.86%, 4=0.08%, 10=0.01% 00:36:56.524 cpu : usr=54.10%, sys=26.88%, ctx=8257, majf=0, minf=26347 00:36:56.524 IO depths : 1=11.7%, 2=24.0%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:56.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.524 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:56.524 issued rwts: total=313473,317419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:56.524 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:56.524 00:36:56.524 Run status group 0 (all jobs): 00:36:56.524 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=1225MiB (1284MB), run=10001-10001msec 00:36:56.524 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=1240MiB (1300MB), run=10001-10001msec 00:36:56.783 ----------------------------------------------------- 00:36:56.783 Suppressions used: 00:36:56.783 count bytes template 00:36:56.783 6 48 /usr/src/fio/parse.c 00:36:56.783 3721 357216 /usr/src/fio/iolog.c 00:36:56.783 1 8 libtcmalloc_minimal.so 00:36:56.783 1 904 libcrypto.so 00:36:56.783 ----------------------------------------------------- 00:36:56.783 00:36:56.783 00:36:56.783 real 0m12.865s 00:36:56.783 user 0m34.757s 00:36:56.783 sys 0m16.634s 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:36:56.783 ************************************ 00:36:56.783 END TEST bdev_fio_rw_verify 00:36:56.783 ************************************ 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
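[annotation] The summary rows above are internally consistent: with --bs=4k each I/O moves 4096 bytes, so over the 10.001 s runtime

  read:  313473 IOs × 4096 B / 10.001 s ≈ 31.3k IOPS ≈ 122 MiB/s (128 MB/s, 1225 MiB total)
  write: 317419 IOs × 4096 B / 10.001 s ≈ 31.7k IOPS ≈ 124 MiB/s (130 MB/s, 1240 MiB total)

which matches the READ/WRITE lines of run status group 0.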
fio_dir=/usr/src/fio 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:36:56.783 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:36:56.784 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "94bfe668-a88a-4920-b01f-f4f9c671c981"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "94bfe668-a88a-4920-b01f-f4f9c671c981",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "bd34767a-4fe4-42b8-aac5-95c0695c8a02"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bd34767a-4fe4-42b8-aac5-95c0695c8a02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ea6856d9-e3fd-44be-8049-0be362769960"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ea6856d9-e3fd-44be-8049-0be362769960",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4bdcb917-cf94-43e2-889f-3eeecc65dcaf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bdcb917-cf94-43e2-889f-3eeecc65dcaf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "46e407db-8d26-49f2-905f-c64c4588db7f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "46e407db-8d26-49f2-905f-c64c4588db7f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "8fb48701-9fad-4101-b645-6c8ae7d30eb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8fb48701-9fad-4101-b645-6c8ae7d30eb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:36:57.043 /home/vagrant/spdk_repo/spdk 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:36:57.043 00:36:57.043 real 0m13.107s 00:36:57.043 user 0m34.868s 00:36:57.043 sys 0m16.771s 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:57.043 05:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:36:57.043 ************************************ 00:36:57.043 END TEST bdev_fio 00:36:57.043 ************************************ 00:36:57.043 05:48:16 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:36:57.043 05:48:16 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:57.043 05:48:16 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:36:57.043 05:48:16 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:57.043 05:48:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:36:57.043 ************************************ 00:36:57.043 START TEST bdev_verify 00:36:57.043 ************************************ 00:36:57.043 05:48:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:36:57.043 [2024-11-20 05:48:16.892387] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:36:57.043 [2024-11-20 05:48:16.892524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72668 ] 00:36:57.302 [2024-11-20 05:48:17.072708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:57.302 [2024-11-20 05:48:17.217372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.302 [2024-11-20 05:48:17.217438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.866 Running I/O for 5 seconds... 
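[annotation] bdev_verify drives every xNVMe bdev with the bdevperf example; annotating the command line recorded above (flag meanings per bdevperf's usage text, so treat this as a reading of the log rather than a spec):

  # -q 128     per-job I/O queue depth
  # -o 4096    I/O size in bytes
  # -w verify  write a pattern, then read it back and compare
  # -t 5       run time in seconds
  # -C         let every core submit I/Os to every bdev
  # -m 0x3     reactor core mask: cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3

The -C / -m 0x3 pairing is why each bdev appears twice in the table below, once per core mask (0x1 and 0x2).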
00:37:00.229 23584.00 IOPS, 92.12 MiB/s [2024-11-20T05:48:21.083Z] 23824.00 IOPS, 93.06 MiB/s [2024-11-20T05:48:22.015Z] 23456.00 IOPS, 91.62 MiB/s [2024-11-20T05:48:22.952Z] 23560.00 IOPS, 92.03 MiB/s [2024-11-20T05:48:22.952Z] 23628.80 IOPS, 92.30 MiB/s 00:37:03.033 Latency(us) 00:37:03.033 [2024-11-20T05:48:22.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.033 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x0 length 0xa0000 00:37:03.033 nvme0n1 : 5.04 1803.81 7.05 0.00 0.00 70842.93 7726.95 67310.34 00:37:03.033 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0xa0000 length 0xa0000 00:37:03.033 nvme0n1 : 5.04 1626.40 6.35 0.00 0.00 78575.09 13679.57 73262.95 00:37:03.033 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x0 length 0xbd0bd 00:37:03.033 nvme1n1 : 5.07 2639.42 10.31 0.00 0.00 48218.21 5866.76 57236.68 00:37:03.033 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:37:03.033 nvme1n1 : 5.05 2606.79 10.18 0.00 0.00 48847.22 4636.17 62731.40 00:37:03.033 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x0 length 0x80000 00:37:03.033 nvme2n1 : 5.05 1852.04 7.23 0.00 0.00 68788.67 11676.28 77383.99 00:37:03.033 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x80000 length 0x80000 00:37:03.033 nvme2n1 : 5.06 1821.14 7.11 0.00 0.00 69738.79 7726.95 59984.04 00:37:03.033 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x0 length 0x80000 00:37:03.033 nvme2n2 : 5.06 1845.47 7.21 0.00 0.00 68829.26 12191.41 77383.99 00:37:03.033 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x80000 length 0x80000 00:37:03.033 nvme2n2 : 5.06 1822.31 7.12 0.00 0.00 69556.61 12248.65 64105.08 00:37:03.033 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x0 length 0x80000 00:37:03.033 nvme2n3 : 5.06 1844.94 7.21 0.00 0.00 68717.39 13107.20 70057.70 00:37:03.033 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x80000 length 0x80000 00:37:03.033 nvme2n3 : 5.07 1819.43 7.11 0.00 0.00 69559.03 6582.22 54489.32 00:37:03.033 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x0 length 0x20000 00:37:03.033 nvme3n1 : 5.07 1867.72 7.30 0.00 0.00 67768.99 5294.39 65020.87 00:37:03.033 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:37:03.033 Verification LBA range: start 0x20000 length 0x20000 00:37:03.033 nvme3n1 : 5.07 1818.63 7.10 0.00 0.00 69512.39 5237.16 64105.08 00:37:03.033 [2024-11-20T05:48:22.952Z] =================================================================================================================== 00:37:03.033 [2024-11-20T05:48:22.952Z] Total : 23368.10 91.28 0.00 0.00 65243.76 4636.17 77383.99 00:37:04.410 ************************************ 00:37:04.410 END TEST bdev_verify 00:37:04.410 ************************************ 00:37:04.410 00:37:04.410 real 
0m7.400s 00:37:04.410 user 0m11.659s 00:37:04.410 sys 0m1.901s 00:37:04.410 05:48:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:04.410 05:48:24 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:37:04.410 05:48:24 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:04.410 05:48:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:37:04.410 05:48:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:04.410 05:48:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:04.410 ************************************ 00:37:04.410 START TEST bdev_verify_big_io 00:37:04.410 ************************************ 00:37:04.410 05:48:24 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:37:04.669 [2024-11-20 05:48:24.375303] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:04.669 [2024-11-20 05:48:24.375471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72776 ] 00:37:04.669 [2024-11-20 05:48:24.567882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:04.928 [2024-11-20 05:48:24.736742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.928 [2024-11-20 05:48:24.736787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.863 Running I/O for 5 seconds... 
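[annotation] bdev_verify_big_io reuses the same bdevperf harness with one change on the command line above: -o 65536 instead of -o 4096, i.e. 64 KiB per I/O. At that size the MiB/s column is simply IOPS divided by 16; the Total row below checks out as 1579.77 IOPS × 64 KiB = 1579.77 / 16 ≈ 98.74 MiB/s.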
00:37:10.928 1272.00 IOPS, 79.50 MiB/s [2024-11-20T05:48:31.412Z] 2466.00 IOPS, 154.12 MiB/s [2024-11-20T05:48:31.412Z] 2983.67 IOPS, 186.48 MiB/s 00:37:11.493 Latency(us) 00:37:11.493 [2024-11-20T05:48:31.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.493 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:11.493 Verification LBA range: start 0x0 length 0xa000 00:37:11.493 nvme0n1 : 5.88 149.71 9.36 0.00 0.00 839784.90 6381.89 1172207.23 00:37:11.493 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:11.493 Verification LBA range: start 0xa000 length 0xa000 00:37:11.493 nvme0n1 : 5.88 118.27 7.39 0.00 0.00 1037356.62 70973.48 1252796.48 00:37:11.493 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:11.493 Verification LBA range: start 0x0 length 0xbd0b 00:37:11.493 nvme1n1 : 5.88 116.98 7.31 0.00 0.00 1050080.63 25069.67 1413974.97 00:37:11.493 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:11.493 Verification LBA range: start 0xbd0b length 0xbd0b 00:37:11.493 nvme1n1 : 5.86 152.95 9.56 0.00 0.00 804729.56 9329.58 1120923.17 00:37:11.493 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:11.493 Verification LBA range: start 0x0 length 0x8000 00:37:11.493 nvme2n1 : 5.89 116.85 7.30 0.00 0.00 1025023.77 46018.29 1179533.53 00:37:11.494 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x8000 length 0x8000 00:37:11.494 nvme2n1 : 5.88 117.08 7.32 0.00 0.00 1026392.79 46705.13 2139278.20 00:37:11.494 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x0 length 0x8000 00:37:11.494 nvme2n2 : 5.87 128.10 8.01 0.00 0.00 913780.42 28160.45 1487237.92 00:37:11.494 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x8000 length 0x8000 00:37:11.494 nvme2n2 : 5.86 92.09 5.76 0.00 0.00 1276422.51 27817.03 3458011.33 00:37:11.494 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x0 length 0x8000 00:37:11.494 nvme2n3 : 5.89 138.53 8.66 0.00 0.00 821390.43 37547.26 1582479.76 00:37:11.494 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x8000 length 0x8000 00:37:11.494 nvme2n3 : 5.88 146.93 9.18 0.00 0.00 779116.83 37318.32 926776.34 00:37:11.494 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x0 length 0x2000 00:37:11.494 nvme3n1 : 5.87 128.52 8.03 0.00 0.00 859419.95 14137.46 893808.01 00:37:11.494 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:37:11.494 Verification LBA range: start 0x2000 length 0x2000 00:37:11.494 nvme3n1 : 5.89 173.76 10.86 0.00 0.00 639898.20 6152.94 996376.15 00:37:11.494 [2024-11-20T05:48:31.413Z] =================================================================================================================== 00:37:11.494 [2024-11-20T05:48:31.413Z] Total : 1579.77 98.74 0.00 0.00 898012.17 6152.94 3458011.33 00:37:13.393 00:37:13.393 real 0m8.971s 00:37:13.393 user 0m16.071s 00:37:13.393 sys 0m0.733s 00:37:13.393 05:48:33 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:13.393 05:48:33 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:37:13.393 ************************************ 00:37:13.393 END TEST bdev_verify_big_io 00:37:13.393 ************************************ 00:37:13.393 05:48:33 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:13.393 05:48:33 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:37:13.393 05:48:33 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:13.393 05:48:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:13.731 ************************************ 00:37:13.731 START TEST bdev_write_zeroes 00:37:13.731 ************************************ 00:37:13.731 05:48:33 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:13.731 [2024-11-20 05:48:33.417867] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:13.731 [2024-11-20 05:48:33.418051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72897 ] 00:37:13.731 [2024-11-20 05:48:33.608056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.990 [2024-11-20 05:48:33.773458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.556 Running I/O for 1 seconds... 
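[annotation] Every stage in this log, bdev_write_zeroes above included, is launched through the repo's run_test helper, which is what emits the starred START TEST / END TEST banners and the real/user/sys footer around each stage. A rough sketch of its observable behaviour (a deliberate simplification; the real helper in common/autotest_common.sh also manages xtrace state and argument checks):

  run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"            # the stage under test; produces the timing footer
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
  }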
00:37:15.493 56768.00 IOPS, 221.75 MiB/s 00:37:15.493 Latency(us) 00:37:15.493 [2024-11-20T05:48:35.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.493 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:15.493 nvme0n1 : 1.02 8938.66 34.92 0.00 0.00 14303.63 7612.48 27130.19 00:37:15.493 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:15.493 nvme1n1 : 1.02 11881.26 46.41 0.00 0.00 10750.46 5466.10 22665.73 00:37:15.493 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:15.493 nvme2n1 : 1.03 8987.53 35.11 0.00 0.00 14125.77 5466.10 25756.51 00:37:15.493 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:15.493 nvme2n2 : 1.02 8906.95 34.79 0.00 0.00 14242.65 7440.77 25756.51 00:37:15.493 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:15.493 nvme2n3 : 1.02 8897.22 34.75 0.00 0.00 14245.46 7440.77 25756.51 00:37:15.493 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:37:15.493 nvme3n1 : 1.02 8887.15 34.72 0.00 0.00 14249.99 7440.77 25756.51 00:37:15.493 [2024-11-20T05:48:35.412Z] =================================================================================================================== 00:37:15.493 [2024-11-20T05:48:35.412Z] Total : 56498.76 220.70 0.00 0.00 13502.63 5466.10 27130.19 00:37:17.433 00:37:17.433 real 0m3.592s 00:37:17.433 user 0m2.749s 00:37:17.433 sys 0m0.680s 00:37:17.433 05:48:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:17.433 05:48:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:37:17.433 ************************************ 00:37:17.433 END TEST bdev_write_zeroes 00:37:17.433 ************************************ 00:37:17.433 05:48:36 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:17.433 05:48:36 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:37:17.433 05:48:36 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:17.433 05:48:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:17.433 ************************************ 00:37:17.433 START TEST bdev_json_nonenclosed 00:37:17.433 ************************************ 00:37:17.433 05:48:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:17.433 [2024-11-20 05:48:37.081356] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:37:17.433 [2024-11-20 05:48:37.081526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72960 ] 00:37:17.433 [2024-11-20 05:48:37.270668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.691 [2024-11-20 05:48:37.435492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.691 [2024-11-20 05:48:37.435617] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:37:17.691 [2024-11-20 05:48:37.435642] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:17.691 [2024-11-20 05:48:37.435654] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:17.948 00:37:17.948 real 0m0.786s 00:37:17.948 user 0m0.513s 00:37:17.948 sys 0m0.166s 00:37:17.948 05:48:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:17.948 05:48:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:37:17.948 ************************************ 00:37:17.948 END TEST bdev_json_nonenclosed 00:37:17.948 ************************************ 00:37:17.948 05:48:37 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:17.948 05:48:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:37:17.948 05:48:37 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:17.948 05:48:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:17.948 ************************************ 00:37:17.948 START TEST bdev_json_nonarray 00:37:17.948 ************************************ 00:37:17.948 05:48:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:37:18.206 [2024-11-20 05:48:37.935032] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:18.206 [2024-11-20 05:48:37.935181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72987 ] 00:37:18.206 [2024-11-20 05:48:38.123949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:18.464 [2024-11-20 05:48:38.289122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:18.464 [2024-11-20 05:48:38.289271] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:37:18.464 [2024-11-20 05:48:38.289294] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:18.464 [2024-11-20 05:48:38.289307] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:18.722 00:37:18.722 real 0m0.786s 00:37:18.722 user 0m0.527s 00:37:18.722 sys 0m0.152s 00:37:18.722 05:48:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:18.722 05:48:38 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:37:18.722 ************************************ 00:37:18.722 END TEST bdev_json_nonarray 00:37:18.722 ************************************ 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:37:18.981 05:48:38 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:19.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:34.421 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:37:34.421 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:46.620 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:46.620 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:37:46.620 00:37:46.620 real 1m32.508s 00:37:46.620 user 1m47.862s 00:37:46.620 sys 1m47.008s 00:37:46.620 05:49:05 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:46.620 05:49:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:37:46.620 ************************************ 00:37:46.620 END TEST blockdev_xnvme 00:37:46.620 ************************************ 00:37:46.620 05:49:06 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:37:46.620 05:49:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:46.620 05:49:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:46.620 05:49:06 -- common/autotest_common.sh@10 -- # set +x 00:37:46.620 ************************************ 00:37:46.620 START TEST ublk 00:37:46.620 ************************************ 00:37:46.620 05:49:06 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:37:46.620 * Looking for test storage... 
00:37:46.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:37:46.620 05:49:06 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:46.620 05:49:06 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:37:46.620 05:49:06 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:46.620 05:49:06 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:46.620 05:49:06 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:46.620 05:49:06 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:46.620 05:49:06 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:46.620 05:49:06 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:37:46.620 05:49:06 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:37:46.620 05:49:06 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:37:46.620 05:49:06 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:37:46.620 05:49:06 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:37:46.620 05:49:06 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:37:46.620 05:49:06 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:37:46.620 05:49:06 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:46.620 05:49:06 ublk -- scripts/common.sh@344 -- # case "$op" in 00:37:46.620 05:49:06 ublk -- scripts/common.sh@345 -- # : 1 00:37:46.621 05:49:06 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:46.621 05:49:06 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:46.621 05:49:06 ublk -- scripts/common.sh@365 -- # decimal 1 00:37:46.621 05:49:06 ublk -- scripts/common.sh@353 -- # local d=1 00:37:46.621 05:49:06 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:46.621 05:49:06 ublk -- scripts/common.sh@355 -- # echo 1 00:37:46.621 05:49:06 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:37:46.621 05:49:06 ublk -- scripts/common.sh@366 -- # decimal 2 00:37:46.621 05:49:06 ublk -- scripts/common.sh@353 -- # local d=2 00:37:46.621 05:49:06 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:46.621 05:49:06 ublk -- scripts/common.sh@355 -- # echo 2 00:37:46.621 05:49:06 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:37:46.621 05:49:06 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:46.621 05:49:06 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:46.621 05:49:06 ublk -- scripts/common.sh@368 -- # return 0 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:46.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.621 --rc genhtml_branch_coverage=1 00:37:46.621 --rc genhtml_function_coverage=1 00:37:46.621 --rc genhtml_legend=1 00:37:46.621 --rc geninfo_all_blocks=1 00:37:46.621 --rc geninfo_unexecuted_blocks=1 00:37:46.621 00:37:46.621 ' 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:46.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.621 --rc genhtml_branch_coverage=1 00:37:46.621 --rc genhtml_function_coverage=1 00:37:46.621 --rc genhtml_legend=1 00:37:46.621 --rc geninfo_all_blocks=1 00:37:46.621 --rc geninfo_unexecuted_blocks=1 00:37:46.621 00:37:46.621 ' 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:46.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.621 --rc genhtml_branch_coverage=1 00:37:46.621 --rc 
genhtml_function_coverage=1 00:37:46.621 --rc genhtml_legend=1 00:37:46.621 --rc geninfo_all_blocks=1 00:37:46.621 --rc geninfo_unexecuted_blocks=1 00:37:46.621 00:37:46.621 ' 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:46.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:46.621 --rc genhtml_branch_coverage=1 00:37:46.621 --rc genhtml_function_coverage=1 00:37:46.621 --rc genhtml_legend=1 00:37:46.621 --rc geninfo_all_blocks=1 00:37:46.621 --rc geninfo_unexecuted_blocks=1 00:37:46.621 00:37:46.621 ' 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:37:46.621 05:49:06 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:37:46.621 05:49:06 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:37:46.621 05:49:06 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:37:46.621 05:49:06 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:37:46.621 05:49:06 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:37:46.621 05:49:06 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:37:46.621 05:49:06 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:37:46.621 05:49:06 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:37:46.621 05:49:06 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:46.621 05:49:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:46.621 ************************************ 00:37:46.621 START TEST test_save_ublk_config 00:37:46.621 ************************************ 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73421 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73421 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 73421 ']' 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:46.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:46.621 05:49:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:46.621 [2024-11-20 05:49:06.461111] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:37:46.621 [2024-11-20 05:49:06.461389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73421 ] 00:37:46.879 [2024-11-20 05:49:06.645956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:47.136 [2024-11-20 05:49:06.801338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.073 05:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:48.332 05:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:37:48.332 05:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:37:48.332 05:49:07 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:37:48.332 05:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.332 05:49:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:48.332 [2024-11-20 05:49:07.999858] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:48.332 [2024-11-20 05:49:08.001287] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:48.332 malloc0 00:37:48.332 [2024-11-20 05:49:08.112030] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:37:48.332 [2024-11-20 05:49:08.112162] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:37:48.332 [2024-11-20 05:49:08.112178] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:37:48.332 [2024-11-20 05:49:08.112186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:37:48.332 [2024-11-20 05:49:08.119874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:48.332 [2024-11-20 05:49:08.119919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:48.332 [2024-11-20 05:49:08.127847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:48.332 [2024-11-20 05:49:08.127981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:37:48.332 [2024-11-20 05:49:08.145957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:37:48.332 0 00:37:48.332 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.332 05:49:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:37:48.332 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:48.332 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:48.604 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:48.604 05:49:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:37:48.604 "subsystems": [ 00:37:48.604 { 00:37:48.604 "subsystem": "fsdev", 00:37:48.604 
"config": [ 00:37:48.604 { 00:37:48.604 "method": "fsdev_set_opts", 00:37:48.604 "params": { 00:37:48.604 "fsdev_io_pool_size": 65535, 00:37:48.604 "fsdev_io_cache_size": 256 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "keyring", 00:37:48.604 "config": [] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "iobuf", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "iobuf_set_options", 00:37:48.604 "params": { 00:37:48.604 "small_pool_count": 8192, 00:37:48.604 "large_pool_count": 1024, 00:37:48.604 "small_bufsize": 8192, 00:37:48.604 "large_bufsize": 135168, 00:37:48.604 "enable_numa": false 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "sock", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "sock_set_default_impl", 00:37:48.604 "params": { 00:37:48.604 "impl_name": "posix" 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "sock_impl_set_options", 00:37:48.604 "params": { 00:37:48.604 "impl_name": "ssl", 00:37:48.604 "recv_buf_size": 4096, 00:37:48.604 "send_buf_size": 4096, 00:37:48.604 "enable_recv_pipe": true, 00:37:48.604 "enable_quickack": false, 00:37:48.604 "enable_placement_id": 0, 00:37:48.604 "enable_zerocopy_send_server": true, 00:37:48.604 "enable_zerocopy_send_client": false, 00:37:48.604 "zerocopy_threshold": 0, 00:37:48.604 "tls_version": 0, 00:37:48.604 "enable_ktls": false 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "sock_impl_set_options", 00:37:48.604 "params": { 00:37:48.604 "impl_name": "posix", 00:37:48.604 "recv_buf_size": 2097152, 00:37:48.604 "send_buf_size": 2097152, 00:37:48.604 "enable_recv_pipe": true, 00:37:48.604 "enable_quickack": false, 00:37:48.604 "enable_placement_id": 0, 00:37:48.604 "enable_zerocopy_send_server": true, 00:37:48.604 "enable_zerocopy_send_client": false, 00:37:48.604 "zerocopy_threshold": 0, 00:37:48.604 "tls_version": 0, 00:37:48.604 "enable_ktls": false 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "vmd", 00:37:48.604 "config": [] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "accel", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "accel_set_options", 00:37:48.604 "params": { 00:37:48.604 "small_cache_size": 128, 00:37:48.604 "large_cache_size": 16, 00:37:48.604 "task_count": 2048, 00:37:48.604 "sequence_count": 2048, 00:37:48.604 "buf_count": 2048 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "bdev", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "bdev_set_options", 00:37:48.604 "params": { 00:37:48.604 "bdev_io_pool_size": 65535, 00:37:48.604 "bdev_io_cache_size": 256, 00:37:48.604 "bdev_auto_examine": true, 00:37:48.604 "iobuf_small_cache_size": 128, 00:37:48.604 "iobuf_large_cache_size": 16 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "bdev_raid_set_options", 00:37:48.604 "params": { 00:37:48.604 "process_window_size_kb": 1024, 00:37:48.604 "process_max_bandwidth_mb_sec": 0 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "bdev_iscsi_set_options", 00:37:48.604 "params": { 00:37:48.604 "timeout_sec": 30 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "bdev_nvme_set_options", 00:37:48.604 "params": { 00:37:48.604 "action_on_timeout": "none", 00:37:48.604 "timeout_us": 0, 00:37:48.604 "timeout_admin_us": 0, 00:37:48.604 
"keep_alive_timeout_ms": 10000, 00:37:48.604 "arbitration_burst": 0, 00:37:48.604 "low_priority_weight": 0, 00:37:48.604 "medium_priority_weight": 0, 00:37:48.604 "high_priority_weight": 0, 00:37:48.604 "nvme_adminq_poll_period_us": 10000, 00:37:48.604 "nvme_ioq_poll_period_us": 0, 00:37:48.604 "io_queue_requests": 0, 00:37:48.604 "delay_cmd_submit": true, 00:37:48.604 "transport_retry_count": 4, 00:37:48.604 "bdev_retry_count": 3, 00:37:48.604 "transport_ack_timeout": 0, 00:37:48.604 "ctrlr_loss_timeout_sec": 0, 00:37:48.604 "reconnect_delay_sec": 0, 00:37:48.604 "fast_io_fail_timeout_sec": 0, 00:37:48.604 "disable_auto_failback": false, 00:37:48.604 "generate_uuids": false, 00:37:48.604 "transport_tos": 0, 00:37:48.604 "nvme_error_stat": false, 00:37:48.604 "rdma_srq_size": 0, 00:37:48.604 "io_path_stat": false, 00:37:48.604 "allow_accel_sequence": false, 00:37:48.604 "rdma_max_cq_size": 0, 00:37:48.604 "rdma_cm_event_timeout_ms": 0, 00:37:48.604 "dhchap_digests": [ 00:37:48.604 "sha256", 00:37:48.604 "sha384", 00:37:48.604 "sha512" 00:37:48.604 ], 00:37:48.604 "dhchap_dhgroups": [ 00:37:48.604 "null", 00:37:48.604 "ffdhe2048", 00:37:48.604 "ffdhe3072", 00:37:48.604 "ffdhe4096", 00:37:48.604 "ffdhe6144", 00:37:48.604 "ffdhe8192" 00:37:48.604 ] 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "bdev_nvme_set_hotplug", 00:37:48.604 "params": { 00:37:48.604 "period_us": 100000, 00:37:48.604 "enable": false 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "bdev_malloc_create", 00:37:48.604 "params": { 00:37:48.604 "name": "malloc0", 00:37:48.604 "num_blocks": 8192, 00:37:48.604 "block_size": 4096, 00:37:48.604 "physical_block_size": 4096, 00:37:48.604 "uuid": "6db24209-4e29-430f-993c-1dacb9dac6a3", 00:37:48.604 "optimal_io_boundary": 0, 00:37:48.604 "md_size": 0, 00:37:48.604 "dif_type": 0, 00:37:48.604 "dif_is_head_of_md": false, 00:37:48.604 "dif_pi_format": 0 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "bdev_wait_for_examine" 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "scsi", 00:37:48.604 "config": null 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "scheduler", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "framework_set_scheduler", 00:37:48.604 "params": { 00:37:48.604 "name": "static" 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "vhost_scsi", 00:37:48.604 "config": [] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "vhost_blk", 00:37:48.604 "config": [] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "ublk", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "ublk_create_target", 00:37:48.604 "params": { 00:37:48.604 "cpumask": "1" 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "ublk_start_disk", 00:37:48.604 "params": { 00:37:48.604 "bdev_name": "malloc0", 00:37:48.604 "ublk_id": 0, 00:37:48.604 "num_queues": 1, 00:37:48.604 "queue_depth": 128 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "nbd", 00:37:48.604 "config": [] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "nvmf", 00:37:48.604 "config": [ 00:37:48.604 { 00:37:48.604 "method": "nvmf_set_config", 00:37:48.604 "params": { 00:37:48.604 "discovery_filter": "match_any", 00:37:48.604 "admin_cmd_passthru": { 00:37:48.604 "identify_ctrlr": false 00:37:48.604 }, 00:37:48.604 "dhchap_digests": [ 00:37:48.604 "sha256", 00:37:48.604 
"sha384", 00:37:48.604 "sha512" 00:37:48.604 ], 00:37:48.604 "dhchap_dhgroups": [ 00:37:48.604 "null", 00:37:48.604 "ffdhe2048", 00:37:48.604 "ffdhe3072", 00:37:48.604 "ffdhe4096", 00:37:48.604 "ffdhe6144", 00:37:48.604 "ffdhe8192" 00:37:48.604 ] 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "nvmf_set_max_subsystems", 00:37:48.604 "params": { 00:37:48.604 "max_subsystems": 1024 00:37:48.604 } 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "method": "nvmf_set_crdt", 00:37:48.604 "params": { 00:37:48.604 "crdt1": 0, 00:37:48.604 "crdt2": 0, 00:37:48.604 "crdt3": 0 00:37:48.604 } 00:37:48.604 } 00:37:48.604 ] 00:37:48.604 }, 00:37:48.604 { 00:37:48.604 "subsystem": "iscsi", 00:37:48.605 "config": [ 00:37:48.605 { 00:37:48.605 "method": "iscsi_set_options", 00:37:48.605 "params": { 00:37:48.605 "node_base": "iqn.2016-06.io.spdk", 00:37:48.605 "max_sessions": 128, 00:37:48.605 "max_connections_per_session": 2, 00:37:48.605 "max_queue_depth": 64, 00:37:48.605 "default_time2wait": 2, 00:37:48.605 "default_time2retain": 20, 00:37:48.605 "first_burst_length": 8192, 00:37:48.605 "immediate_data": true, 00:37:48.605 "allow_duplicated_isid": false, 00:37:48.605 "error_recovery_level": 0, 00:37:48.605 "nop_timeout": 60, 00:37:48.605 "nop_in_interval": 30, 00:37:48.605 "disable_chap": false, 00:37:48.605 "require_chap": false, 00:37:48.605 "mutual_chap": false, 00:37:48.605 "chap_group": 0, 00:37:48.605 "max_large_datain_per_connection": 64, 00:37:48.605 "max_r2t_per_connection": 4, 00:37:48.605 "pdu_pool_size": 36864, 00:37:48.605 "immediate_data_pool_size": 16384, 00:37:48.605 "data_out_pool_size": 2048 00:37:48.605 } 00:37:48.605 } 00:37:48.605 ] 00:37:48.605 } 00:37:48.605 ] 00:37:48.605 }' 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73421 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 73421 ']' 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 73421 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73421 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:48.605 killing process with pid 73421 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73421' 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 73421 00:37:48.605 05:49:08 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 73421 00:37:50.508 [2024-11-20 05:49:10.265424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:50.508 [2024-11-20 05:49:10.302878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:50.508 [2024-11-20 05:49:10.303056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:50.508 [2024-11-20 05:49:10.316838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:50.508 [2024-11-20 05:49:10.316932] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:37:50.508 [2024-11-20 05:49:10.316952] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:50.508 [2024-11-20 05:49:10.316983] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:50.508 [2024-11-20 05:49:10.317179] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73498 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73498 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 73498 ']' 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:53.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:53.041 05:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:37:53.041 "subsystems": [ 00:37:53.041 { 00:37:53.041 "subsystem": "fsdev", 00:37:53.041 "config": [ 00:37:53.041 { 00:37:53.041 "method": "fsdev_set_opts", 00:37:53.041 "params": { 00:37:53.041 "fsdev_io_pool_size": 65535, 00:37:53.041 "fsdev_io_cache_size": 256 00:37:53.041 } 00:37:53.041 } 00:37:53.041 ] 00:37:53.041 }, 00:37:53.041 { 00:37:53.041 "subsystem": "keyring", 00:37:53.041 "config": [] 00:37:53.041 }, 00:37:53.041 { 00:37:53.041 "subsystem": "iobuf", 00:37:53.041 "config": [ 00:37:53.041 { 00:37:53.041 "method": "iobuf_set_options", 00:37:53.041 "params": { 00:37:53.041 "small_pool_count": 8192, 00:37:53.041 "large_pool_count": 1024, 00:37:53.041 "small_bufsize": 8192, 00:37:53.041 "large_bufsize": 135168, 00:37:53.041 "enable_numa": false 00:37:53.041 } 00:37:53.041 } 00:37:53.041 ] 00:37:53.041 }, 00:37:53.041 { 00:37:53.041 "subsystem": "sock", 00:37:53.041 "config": [ 00:37:53.041 { 00:37:53.041 "method": "sock_set_default_impl", 00:37:53.041 "params": { 00:37:53.041 "impl_name": "posix" 00:37:53.041 } 00:37:53.041 }, 00:37:53.041 { 00:37:53.041 "method": "sock_impl_set_options", 00:37:53.041 "params": { 00:37:53.041 "impl_name": "ssl", 00:37:53.041 "recv_buf_size": 4096, 00:37:53.041 "send_buf_size": 4096, 00:37:53.041 "enable_recv_pipe": true, 00:37:53.041 "enable_quickack": false, 00:37:53.041 "enable_placement_id": 0, 00:37:53.041 "enable_zerocopy_send_server": true, 00:37:53.041 "enable_zerocopy_send_client": false, 00:37:53.041 "zerocopy_threshold": 0, 00:37:53.041 "tls_version": 0, 00:37:53.041 "enable_ktls": false 00:37:53.041 } 00:37:53.041 }, 00:37:53.041 { 00:37:53.041 "method": "sock_impl_set_options", 00:37:53.041 "params": { 00:37:53.041 "impl_name": "posix", 00:37:53.041 "recv_buf_size": 2097152, 00:37:53.042 "send_buf_size": 2097152, 00:37:53.042 "enable_recv_pipe": true, 00:37:53.042 "enable_quickack": false, 00:37:53.042 "enable_placement_id": 0, 00:37:53.042 "enable_zerocopy_send_server": true, 00:37:53.042 "enable_zerocopy_send_client": false, 00:37:53.042 "zerocopy_threshold": 0, 00:37:53.042 "tls_version": 0, 00:37:53.042 "enable_ktls": false 00:37:53.042 } 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 
"subsystem": "vmd", 00:37:53.042 "config": [] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "accel", 00:37:53.042 "config": [ 00:37:53.042 { 00:37:53.042 "method": "accel_set_options", 00:37:53.042 "params": { 00:37:53.042 "small_cache_size": 128, 00:37:53.042 "large_cache_size": 16, 00:37:53.042 "task_count": 2048, 00:37:53.042 "sequence_count": 2048, 00:37:53.042 "buf_count": 2048 00:37:53.042 } 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "bdev", 00:37:53.042 "config": [ 00:37:53.042 { 00:37:53.042 "method": "bdev_set_options", 00:37:53.042 "params": { 00:37:53.042 "bdev_io_pool_size": 65535, 00:37:53.042 "bdev_io_cache_size": 256, 00:37:53.042 "bdev_auto_examine": true, 00:37:53.042 "iobuf_small_cache_size": 128, 00:37:53.042 "iobuf_large_cache_size": 16 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "bdev_raid_set_options", 00:37:53.042 "params": { 00:37:53.042 "process_window_size_kb": 1024, 00:37:53.042 "process_max_bandwidth_mb_sec": 0 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "bdev_iscsi_set_options", 00:37:53.042 "params": { 00:37:53.042 "timeout_sec": 30 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "bdev_nvme_set_options", 00:37:53.042 "params": { 00:37:53.042 "action_on_timeout": "none", 00:37:53.042 "timeout_us": 0, 00:37:53.042 "timeout_admin_us": 0, 00:37:53.042 "keep_alive_timeout_ms": 10000, 00:37:53.042 "arbitration_burst": 0, 00:37:53.042 "low_priority_weight": 0, 00:37:53.042 "medium_priority_weight": 0, 00:37:53.042 "high_priority_weight": 0, 00:37:53.042 "nvme_adminq_poll_period_us": 10000, 00:37:53.042 "nvme_ioq_poll_period_us": 0, 00:37:53.042 "io_queue_requests": 0, 00:37:53.042 "delay_cmd_submit": true, 00:37:53.042 "transport_retry_count": 4, 00:37:53.042 "bdev_retry_count": 3, 00:37:53.042 "transport_ack_timeout": 0, 00:37:53.042 "ctrlr_loss_timeout_sec": 0, 00:37:53.042 "reconnect_delay_sec": 0, 00:37:53.042 "fast_io_fail_timeout_sec": 0, 00:37:53.042 "disable_auto_failback": false, 00:37:53.042 "generate_uuids": false, 00:37:53.042 "transport_tos": 0, 00:37:53.042 "nvme_error_stat": false, 00:37:53.042 "rdma_srq_size": 0, 00:37:53.042 "io_path_stat": false, 00:37:53.042 "allow_accel_sequence": false, 00:37:53.042 "rdma_max_cq_size": 0, 00:37:53.042 "rdma_cm_event_timeout_ms": 0, 00:37:53.042 "dhchap_digests": [ 00:37:53.042 "sha256", 00:37:53.042 "sha384", 00:37:53.042 "sha512" 00:37:53.042 ], 00:37:53.042 "dhchap_dhgroups": [ 00:37:53.042 "null", 00:37:53.042 "ffdhe2048", 00:37:53.042 "ffdhe3072", 00:37:53.042 "ffdhe4096", 00:37:53.042 "ffdhe6144", 00:37:53.042 "ffdhe8192" 00:37:53.042 ] 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "bdev_nvme_set_hotplug", 00:37:53.042 "params": { 00:37:53.042 "period_us": 100000, 00:37:53.042 "enable": false 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "bdev_malloc_create", 00:37:53.042 "params": { 00:37:53.042 "name": "malloc0", 00:37:53.042 "num_blocks": 8192, 00:37:53.042 "block_size": 4096, 00:37:53.042 "physical_block_size": 4096, 00:37:53.042 "uuid": "6db24209-4e29-430f-993c-1dacb9dac6a3", 00:37:53.042 "optimal_io_boundary": 0, 00:37:53.042 "md_size": 0, 00:37:53.042 "dif_type": 0, 00:37:53.042 "dif_is_head_of_md": false, 00:37:53.042 "dif_pi_format": 0 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "bdev_wait_for_examine" 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "scsi", 
00:37:53.042 "config": null 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "scheduler", 00:37:53.042 "config": [ 00:37:53.042 { 00:37:53.042 "method": "framework_set_scheduler", 00:37:53.042 "params": { 00:37:53.042 "name": "static" 00:37:53.042 } 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "vhost_scsi", 00:37:53.042 "config": [] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "vhost_blk", 00:37:53.042 "config": [] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "ublk", 00:37:53.042 "config": [ 00:37:53.042 { 00:37:53.042 "method": "ublk_create_target", 00:37:53.042 "params": { 00:37:53.042 "cpumask": "1" 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "ublk_start_disk", 00:37:53.042 "params": { 00:37:53.042 "bdev_name": "malloc0", 00:37:53.042 "ublk_id": 0, 00:37:53.042 "num_queues": 1, 00:37:53.042 "queue_depth": 128 00:37:53.042 } 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "nbd", 00:37:53.042 "config": [] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "nvmf", 00:37:53.042 "config": [ 00:37:53.042 { 00:37:53.042 "method": "nvmf_set_config", 00:37:53.042 "params": { 00:37:53.042 "discovery_filter": "match_any", 00:37:53.042 "admin_cmd_passthru": { 00:37:53.042 "identify_ctrlr": false 00:37:53.042 }, 00:37:53.042 "dhchap_digests": [ 00:37:53.042 "sha256", 00:37:53.042 "sha384", 00:37:53.042 "sha512" 00:37:53.042 ], 00:37:53.042 "dhchap_dhgroups": [ 00:37:53.042 "null", 00:37:53.042 "ffdhe2048", 00:37:53.042 "ffdhe3072", 00:37:53.042 "ffdhe4096", 00:37:53.042 "ffdhe6144", 00:37:53.042 "ffdhe8192" 00:37:53.042 ] 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "nvmf_set_max_subsystems", 00:37:53.042 "params": { 00:37:53.042 "max_subsystems": 1024 00:37:53.042 } 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "method": "nvmf_set_crdt", 00:37:53.042 "params": { 00:37:53.042 "crdt1": 0, 00:37:53.042 "crdt2": 0, 00:37:53.042 "crdt3": 0 00:37:53.042 } 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }, 00:37:53.042 { 00:37:53.042 "subsystem": "iscsi", 00:37:53.042 "config": [ 00:37:53.042 { 00:37:53.042 "method": "iscsi_set_options", 00:37:53.042 "params": { 00:37:53.042 "node_base": "iqn.2016-06.io.spdk", 00:37:53.042 "max_sessions": 128, 00:37:53.042 "max_connections_per_session": 2, 00:37:53.042 "max_queue_depth": 64, 00:37:53.042 "default_time2wait": 2, 00:37:53.042 "default_time2retain": 20, 00:37:53.042 "first_burst_length": 8192, 00:37:53.042 "immediate_data": true, 00:37:53.042 "allow_duplicated_isid": false, 00:37:53.042 "error_recovery_level": 0, 00:37:53.042 "nop_timeout": 60, 00:37:53.042 "nop_in_interval": 30, 00:37:53.042 "disable_chap": false, 00:37:53.042 "require_chap": false, 00:37:53.042 "mutual_chap": false, 00:37:53.042 "chap_group": 0, 00:37:53.042 "max_large_datain_per_connection": 64, 00:37:53.042 "max_r2t_per_connection": 4, 00:37:53.042 "pdu_pool_size": 36864, 00:37:53.042 "immediate_data_pool_size": 16384, 00:37:53.042 "data_out_pool_size": 2048 00:37:53.042 } 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 } 00:37:53.042 ] 00:37:53.042 }' 00:37:53.042 05:49:12 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:37:53.042 [2024-11-20 05:49:12.749063] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:37:53.042 [2024-11-20 05:49:12.749235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73498 ] 00:37:53.042 [2024-11-20 05:49:12.943017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.301 [2024-11-20 05:49:13.105468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.679 [2024-11-20 05:49:14.479833] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:37:54.679 [2024-11-20 05:49:14.481144] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:37:54.679 [2024-11-20 05:49:14.487057] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:37:54.679 [2024-11-20 05:49:14.487201] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:37:54.679 [2024-11-20 05:49:14.487216] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:37:54.679 [2024-11-20 05:49:14.487225] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:37:54.679 [2024-11-20 05:49:14.494871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:37:54.679 [2024-11-20 05:49:14.494901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:37:54.679 [2024-11-20 05:49:14.502852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:37:54.679 [2024-11-20 05:49:14.502987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:37:54.679 [2024-11-20 05:49:14.519878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:37:54.679 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73498 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 73498 ']' 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 73498 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73498 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:54.938 killing process with pid 73498 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73498' 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 73498 00:37:54.938 05:49:14 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 73498 00:37:57.471 [2024-11-20 05:49:16.946130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:37:57.471 [2024-11-20 05:49:16.985939] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:57.471 [2024-11-20 05:49:16.986190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:37:57.471 [2024-11-20 05:49:16.993850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:57.471 [2024-11-20 05:49:16.993923] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:37:57.471 [2024-11-20 05:49:16.993933] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:37:57.471 [2024-11-20 05:49:16.993965] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:57.471 [2024-11-20 05:49:16.994161] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:59.397 05:49:19 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:37:59.397 00:37:59.397 real 0m12.954s 00:37:59.397 user 0m9.920s 00:37:59.397 sys 0m3.949s 00:37:59.397 05:49:19 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:59.398 05:49:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:37:59.398 ************************************ 00:37:59.398 END TEST test_save_ublk_config 00:37:59.398 ************************************ 00:37:59.656 05:49:19 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73611 00:37:59.656 05:49:19 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:37:59.656 05:49:19 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:59.656 05:49:19 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73611 00:37:59.656 05:49:19 ublk -- common/autotest_common.sh@833 -- # '[' -z 73611 ']' 00:37:59.656 05:49:19 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.656 05:49:19 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:59.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:59.656 05:49:19 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.656 05:49:19 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:59.656 05:49:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:37:59.656 [2024-11-20 05:49:19.465798] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
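
The test that follows (test_create_ublk) walks the basic ublk lifecycle over RPC. A condensed sketch of that sequence (an illustration, not captured output), assuming rpc.py talks to the spdk_tgt started above; the parameters mirror the test's 128 MiB malloc bdev:

  rpc.py ublk_create_target                      # needs ublk_drv loaded (modprobe earlier in the run)
  rpc.py bdev_malloc_create -b Malloc0 128 4096  # 128 MiB RAM-backed bdev, 4 KiB blocks
  rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0: 4 queues, depth 512
  rpc.py ublk_get_disks                          # JSON the test checks with jq below
  rpc.py ublk_stop_disk 0                        # removes /dev/ublkb0
  rpc.py ublk_destroy_target
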
00:37:59.656 [2024-11-20 05:49:19.466010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73611 ] 00:37:59.915 [2024-11-20 05:49:19.652763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:59.915 [2024-11-20 05:49:19.817789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.915 [2024-11-20 05:49:19.817869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.294 05:49:21 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:01.294 05:49:21 ublk -- common/autotest_common.sh@866 -- # return 0 00:38:01.294 05:49:21 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:38:01.294 05:49:21 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:01.294 05:49:21 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:01.294 05:49:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:38:01.294 ************************************ 00:38:01.294 START TEST test_create_ublk 00:38:01.294 ************************************ 00:38:01.294 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:38:01.294 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:38:01.294 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.294 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:01.294 [2024-11-20 05:49:21.027875] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:38:01.294 [2024-11-20 05:49:21.031680] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:38:01.294 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.294 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:38:01.294 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:38:01.294 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.294 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:01.553 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.553 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:38:01.553 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:38:01.553 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.553 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:01.553 [2024-11-20 05:49:21.442058] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:38:01.553 [2024-11-20 05:49:21.442585] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:38:01.553 [2024-11-20 05:49:21.442606] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:38:01.553 [2024-11-20 05:49:21.442616] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:38:01.553 [2024-11-20 05:49:21.449914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:38:01.553 [2024-11-20 05:49:21.449945] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:38:01.553 
[2024-11-20 05:49:21.457855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:38:01.553 [2024-11-20 05:49:21.458640] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:38:01.811 [2024-11-20 05:49:21.481886] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:38:01.811 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:38:01.811 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:01.811 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:01.811 05:49:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:38:01.811 { 00:38:01.811 "ublk_device": "/dev/ublkb0", 00:38:01.811 "id": 0, 00:38:01.811 "queue_depth": 512, 00:38:01.811 "num_queues": 4, 00:38:01.811 "bdev_name": "Malloc0" 00:38:01.811 } 00:38:01.811 ]' 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:38:01.811 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:38:02.070 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:38:02.070 05:49:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
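
One detail of the fio job assembled above: because it is --time_based with --runtime=10, the write phase consumes the whole run and the --do_verify=1 read-back never executes; fio warns about exactly this in the output below. A bounded variant that would actually verify the 0xcc pattern (illustrative, not from the log) simply drops the time limit:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --do_verify=1 --verify=pattern \
      --verify_pattern=0xcc --verify_state_save=0
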
00:38:02.070 05:49:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:38:02.070 fio: verification read phase will never start because write phase uses all of runtime 00:38:02.070 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:38:02.070 fio-3.35 00:38:02.070 Starting 1 process 00:38:14.280 00:38:14.280 fio_test: (groupid=0, jobs=1): err= 0: pid=73663: Wed Nov 20 05:49:31 2024 00:38:14.280 write: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(464MiB/10001msec); 0 zone resets 00:38:14.280 clat (usec): min=46, max=11445, avg=83.06, stdev=174.65 00:38:14.280 lat (usec): min=46, max=11479, avg=83.67, stdev=174.72 00:38:14.280 clat percentiles (usec): 00:38:14.280 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 67], 20.00th=[ 69], 00:38:14.280 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:38:14.280 | 70.00th=[ 75], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 92], 00:38:14.280 | 99.00th=[ 110], 99.50th=[ 129], 99.90th=[ 3556], 99.95th=[ 3851], 00:38:14.280 | 99.99th=[ 4146] 00:38:14.280 bw ( KiB/s): min=17900, max=52272, per=99.73%, avg=47411.58, stdev=10082.99, samples=19 00:38:14.280 iops : min= 4475, max=13066, avg=11852.79, stdev=2520.70, samples=19 00:38:14.280 lat (usec) : 50=0.02%, 100=97.87%, 250=1.77%, 500=0.01%, 750=0.01% 00:38:14.280 lat (usec) : 1000=0.02% 00:38:14.280 lat (msec) : 2=0.07%, 4=0.22%, 10=0.03%, 20=0.01% 00:38:14.280 cpu : usr=2.01%, sys=8.63%, ctx=118856, majf=0, minf=798 00:38:14.280 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:14.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.280 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:14.280 issued rwts: total=0,118855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:14.280 latency : target=0, window=0, percentile=100.00%, depth=1 00:38:14.280 00:38:14.280 Run status group 0 (all jobs): 00:38:14.280 WRITE: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=464MiB (487MB), run=10001-10001msec 00:38:14.280 00:38:14.280 Disk stats (read/write): 00:38:14.280 ublkb0: ios=0/117580, merge=0/0, ticks=0/8893, in_queue=8894, util=99.10% 00:38:14.280 05:49:32 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 [2024-11-20 05:49:32.015541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:38:14.280 [2024-11-20 05:49:32.053474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:38:14.280 [2024-11-20 05:49:32.054451] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:38:14.280 [2024-11-20 05:49:32.062885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:14.280 [2024-11-20 05:49:32.063263] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:38:14.280 [2024-11-20 05:49:32.063286] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.280 05:49:32 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.280 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.280 [2024-11-20 05:49:32.082025] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:38:14.280 request: 00:38:14.280 { 00:38:14.280 "ublk_id": 0, 00:38:14.280 "method": "ublk_stop_disk", 00:38:14.280 "req_id": 1 00:38:14.280 } 00:38:14.280 Got JSON-RPC error response 00:38:14.280 response: 00:38:14.280 { 00:38:14.280 "code": -19, 00:38:14.280 "message": "No such device" 00:38:14.281 } 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:14.281 05:49:32 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 [2024-11-20 05:49:32.097062] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:14.281 [2024-11-20 05:49:32.105765] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:14.281 [2024-11-20 05:49:32.105835] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:32 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:38:14.281 05:49:33 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:38:14.281 05:49:33 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:38:14.281 00:38:14.281 real 0m12.162s 00:38:14.281 user 0m0.615s 00:38:14.281 sys 0m1.006s 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 ************************************ 00:38:14.281 END TEST test_create_ublk 00:38:14.281 ************************************ 00:38:14.281 05:49:33 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:38:14.281 05:49:33 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:14.281 05:49:33 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:14.281 05:49:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 ************************************ 00:38:14.281 START TEST test_create_multi_ublk 00:38:14.281 ************************************ 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 [2024-11-20 05:49:33.243842] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:38:14.281 [2024-11-20 05:49:33.247249] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 [2024-11-20 05:49:33.624041] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:38:14.281 [2024-11-20 05:49:33.624573] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:38:14.281 [2024-11-20 05:49:33.624595] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:38:14.281 [2024-11-20 05:49:33.624610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:38:14.281 [2024-11-20 05:49:33.631870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:38:14.281 [2024-11-20 05:49:33.631910] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:38:14.281 [2024-11-20 05:49:33.639882] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:38:14.281 [2024-11-20 05:49:33.640686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:38:14.281 [2024-11-20 05:49:33.654951] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.281 [2024-11-20 05:49:34.059043] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:38:14.281 [2024-11-20 05:49:34.059559] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:38:14.281 [2024-11-20 05:49:34.059581] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:38:14.281 [2024-11-20 05:49:34.059591] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:38:14.281 [2024-11-20 05:49:34.066878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:38:14.281 [2024-11-20 05:49:34.066911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:38:14.281 [2024-11-20 05:49:34.074872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:38:14.281 [2024-11-20 05:49:34.075615] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:38:14.281 [2024-11-20 05:49:34.098868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:14.281 
05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.281 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.854 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.854 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:38:14.854 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:38:14.854 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.854 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:14.854 [2024-11-20 05:49:34.504023] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:38:14.854 [2024-11-20 05:49:34.504565] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:38:14.854 [2024-11-20 05:49:34.504584] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:38:14.854 [2024-11-20 05:49:34.504597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:38:14.854 [2024-11-20 05:49:34.511885] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:38:14.854 [2024-11-20 05:49:34.511924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:38:14.854 [2024-11-20 05:49:34.519883] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:38:14.855 [2024-11-20 05:49:34.520698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:38:14.855 [2024-11-20 05:49:34.529020] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:38:14.855 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:14.855 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:38:14.855 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:14.855 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:38:14.855 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:14.855 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:15.114 [2024-11-20 05:49:34.936072] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:38:15.114 [2024-11-20 05:49:34.936594] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:38:15.114 [2024-11-20 05:49:34.936617] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:38:15.114 [2024-11-20 05:49:34.936627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:38:15.114 
[2024-11-20 05:49:34.947848] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:38:15.114 [2024-11-20 05:49:34.947883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:38:15.114 [2024-11-20 05:49:34.958841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:38:15.114 [2024-11-20 05:49:34.959678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:38:15.114 [2024-11-20 05:49:34.970928] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.114 05:49:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:15.114 05:49:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.114 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:38:15.114 { 00:38:15.114 "ublk_device": "/dev/ublkb0", 00:38:15.114 "id": 0, 00:38:15.114 "queue_depth": 512, 00:38:15.114 "num_queues": 4, 00:38:15.114 "bdev_name": "Malloc0" 00:38:15.114 }, 00:38:15.114 { 00:38:15.114 "ublk_device": "/dev/ublkb1", 00:38:15.114 "id": 1, 00:38:15.114 "queue_depth": 512, 00:38:15.114 "num_queues": 4, 00:38:15.114 "bdev_name": "Malloc1" 00:38:15.114 }, 00:38:15.114 { 00:38:15.114 "ublk_device": "/dev/ublkb2", 00:38:15.114 "id": 2, 00:38:15.114 "queue_depth": 512, 00:38:15.114 "num_queues": 4, 00:38:15.114 "bdev_name": "Malloc2" 00:38:15.114 }, 00:38:15.114 { 00:38:15.114 "ublk_device": "/dev/ublkb3", 00:38:15.114 "id": 3, 00:38:15.114 "queue_depth": 512, 00:38:15.114 "num_queues": 4, 00:38:15.114 "bdev_name": "Malloc3" 00:38:15.114 } 00:38:15.114 ]' 00:38:15.114 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:38:15.114 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:15.114 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:15.374 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:38:15.633 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:38:15.892 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:38:16.152 05:49:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:16.152 [2024-11-20 05:49:36.008084] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:38:16.152 [2024-11-20 05:49:36.046926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:38:16.152 [2024-11-20 05:49:36.047992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:38:16.152 [2024-11-20 05:49:36.054850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:16.152 [2024-11-20 05:49:36.055223] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:38:16.152 [2024-11-20 05:49:36.055247] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.152 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:16.412 [2024-11-20 05:49:36.070978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:38:16.412 [2024-11-20 05:49:36.109916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:38:16.412 [2024-11-20 05:49:36.110948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:38:16.412 [2024-11-20 05:49:36.119909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:16.412 [2024-11-20 05:49:36.120294] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:38:16.412 [2024-11-20 05:49:36.120315] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:16.412 [2024-11-20 05:49:36.135072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:38:16.412 [2024-11-20 05:49:36.169521] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:38:16.412 [2024-11-20 05:49:36.170675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:38:16.412 [2024-11-20 05:49:36.176866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:16.412 [2024-11-20 05:49:36.177243] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:38:16.412 [2024-11-20 05:49:36.177261] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:38:16.412 [2024-11-20 05:49:36.193051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:38:16.412 [2024-11-20 05:49:36.225912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:38:16.412 [2024-11-20 05:49:36.226501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:38:16.412 [2024-11-20 05:49:36.236906] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:38:16.412 [2024-11-20 05:49:36.237343] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:38:16.412 [2024-11-20 05:49:36.237364] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:16.412 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:38:16.671 [2024-11-20 05:49:36.470966] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:16.671 [2024-11-20 05:49:36.478857] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:16.671 [2024-11-20 05:49:36.478934] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:38:16.671 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:38:16.671 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:16.671 05:49:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:38:16.671 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:16.671 05:49:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:17.609 05:49:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.609 05:49:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:17.609 05:49:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:38:17.609 05:49:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.609 05:49:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:17.868 05:49:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.868 05:49:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:17.868 05:49:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:38:17.868 05:49:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:17.868 05:49:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:18.436 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.436 05:49:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:38:18.436 05:49:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:38:18.436 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.436 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:38:18.711 05:49:38 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:38:18.711 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:38:18.970 05:49:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:38:18.970 00:38:18.970 real 0m5.488s 00:38:18.970 user 0m1.210s 00:38:18.970 sys 0m0.222s 00:38:18.971 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:18.971 05:49:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:38:18.971 ************************************ 00:38:18.971 END TEST test_create_multi_ublk 00:38:18.971 ************************************ 00:38:18.971 05:49:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:18.971 05:49:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:38:18.971 05:49:38 ublk -- ublk/ublk.sh@130 -- # killprocess 73611 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@952 -- # '[' -z 73611 ']' 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@956 -- # kill -0 73611 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@957 -- # uname 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73611 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:18.971 killing process with pid 73611 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73611' 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@971 -- # kill 73611 00:38:18.971 05:49:38 ublk -- common/autotest_common.sh@976 -- # wait 73611 00:38:20.345 [2024-11-20 05:49:40.208557] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:38:20.345 [2024-11-20 05:49:40.208637] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:38:22.252 00:38:22.252 real 0m35.639s 00:38:22.252 user 0m49.782s 00:38:22.252 sys 0m11.720s 00:38:22.252 05:49:41 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:22.252 ************************************ 00:38:22.252 END TEST ublk 00:38:22.252 ************************************ 00:38:22.252 05:49:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:38:22.252 05:49:41 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:38:22.252 
05:49:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:22.252 05:49:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:22.252 05:49:41 -- common/autotest_common.sh@10 -- # set +x 00:38:22.252 ************************************ 00:38:22.252 START TEST ublk_recovery 00:38:22.252 ************************************ 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:38:22.252 * Looking for test storage... 00:38:22.252 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:22.252 05:49:41 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.252 --rc genhtml_branch_coverage=1 00:38:22.252 --rc genhtml_function_coverage=1 00:38:22.252 --rc genhtml_legend=1 00:38:22.252 --rc geninfo_all_blocks=1 00:38:22.252 --rc geninfo_unexecuted_blocks=1 00:38:22.252 00:38:22.252 ' 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.252 --rc genhtml_branch_coverage=1 00:38:22.252 --rc genhtml_function_coverage=1 00:38:22.252 --rc genhtml_legend=1 00:38:22.252 --rc geninfo_all_blocks=1 00:38:22.252 --rc geninfo_unexecuted_blocks=1 00:38:22.252 00:38:22.252 ' 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.252 --rc genhtml_branch_coverage=1 00:38:22.252 --rc genhtml_function_coverage=1 00:38:22.252 --rc genhtml_legend=1 00:38:22.252 --rc geninfo_all_blocks=1 00:38:22.252 --rc geninfo_unexecuted_blocks=1 00:38:22.252 00:38:22.252 ' 00:38:22.252 05:49:41 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:22.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:22.252 --rc genhtml_branch_coverage=1 00:38:22.252 --rc genhtml_function_coverage=1 00:38:22.252 --rc genhtml_legend=1 00:38:22.252 --rc geninfo_all_blocks=1 00:38:22.252 --rc geninfo_unexecuted_blocks=1 00:38:22.252 00:38:22.252 ' 00:38:22.252 05:49:41 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:38:22.252 05:49:41 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:38:22.252 05:49:41 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:38:22.252 05:49:42 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74064 00:38:22.252 05:49:42 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:38:22.252 05:49:42 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:22.252 05:49:42 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74064 00:38:22.252 05:49:42 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 74064 ']' 00:38:22.252 05:49:42 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:22.252 05:49:42 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:22.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:22.252 05:49:42 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:22.252 05:49:42 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:22.252 05:49:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:22.252 [2024-11-20 05:49:42.129939] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:38:22.253 [2024-11-20 05:49:42.130110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74064 ] 00:38:22.511 [2024-11-20 05:49:42.319399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:22.770 [2024-11-20 05:49:42.506085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.770 [2024-11-20 05:49:42.506130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:38:24.144 05:49:43 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:24.144 [2024-11-20 05:49:43.754830] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:38:24.144 [2024-11-20 05:49:43.758601] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.144 05:49:43 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:24.144 malloc0 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.144 05:49:43 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.144 05:49:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:24.144 [2024-11-20 05:49:43.978042] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:38:24.144 [2024-11-20 05:49:43.978204] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:38:24.144 [2024-11-20 05:49:43.978227] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:38:24.144 [2024-11-20 05:49:43.978241] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:38:24.144 [2024-11-20 05:49:43.986987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:38:24.144 [2024-11-20 05:49:43.987010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:38:24.144 [2024-11-20 05:49:43.993851] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:38:24.144 [2024-11-20 05:49:43.994041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:38:24.144 [2024-11-20 05:49:44.010852] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:38:24.144 1 00:38:24.144 05:49:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.144 05:49:44 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:38:25.521 05:49:45 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74106 00:38:25.521 05:49:45 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:38:25.521 05:49:45 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:38:25.521 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:38:25.521 fio-3.35 00:38:25.521 Starting 1 process 00:38:30.790 05:49:50 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74064 00:38:30.790 05:49:50 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:38:36.057 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74064 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:38:36.057 05:49:55 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74212 00:38:36.057 05:49:55 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:38:36.057 05:49:55 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:36.057 05:49:55 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74212 00:38:36.057 05:49:55 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 74212 ']' 00:38:36.057 05:49:55 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.057 05:49:55 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:36.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.058 05:49:55 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.058 05:49:55 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:36.058 05:49:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:36.058 [2024-11-20 05:49:55.187889] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:38:36.058 [2024-11-20 05:49:55.188115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74212 ] 00:38:36.058 [2024-11-20 05:49:55.381201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:36.058 [2024-11-20 05:49:55.551624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.058 [2024-11-20 05:49:55.551666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:38:36.995 05:49:56 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:36.995 [2024-11-20 05:49:56.787831] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:38:36.995 [2024-11-20 05:49:56.791468] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:36.995 05:49:56 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:36.995 05:49:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:37.255 malloc0 00:38:37.255 05:49:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.255 05:49:56 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:38:37.255 05:49:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:37.255 05:49:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:38:37.255 [2024-11-20 05:49:56.992107] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:38:37.255 [2024-11-20 05:49:56.992164] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:38:37.255 [2024-11-20 05:49:56.992199] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:38:37.255 [2024-11-20 05:49:56.999896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:38:37.255 [2024-11-20 05:49:56.999935] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:38:37.255 [2024-11-20 05:49:56.999950] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:38:37.255 [2024-11-20 05:49:57.000081] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:38:37.255 1 00:38:37.255 05:49:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:37.255 05:49:57 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74106 00:38:37.255 [2024-11-20 05:49:57.007846] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:38:37.255 [2024-11-20 05:49:57.014723] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:38:37.255 [2024-11-20 05:49:57.021900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:38:37.255 [2024-11-20 
05:49:57.021944] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:39:33.562 00:39:33.562 fio_test: (groupid=0, jobs=1): err= 0: pid=74109: Wed Nov 20 05:50:45 2024 00:39:33.562 read: IOPS=19.8k, BW=77.4MiB/s (81.1MB/s)(4643MiB/60002msec) 00:39:33.562 slat (nsec): min=1156, max=804238, avg=8157.65, stdev=3601.67 00:39:33.562 clat (usec): min=1365, max=7001.6k, avg=3206.19, stdev=53314.71 00:39:33.562 lat (usec): min=1378, max=7001.6k, avg=3214.34, stdev=53314.73 00:39:33.562 clat percentiles (usec): 00:39:33.562 | 1.00th=[ 2147], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2442], 00:39:33.562 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:39:33.562 | 70.00th=[ 2835], 80.00th=[ 2933], 90.00th=[ 3228], 95.00th=[ 3916], 00:39:33.562 | 99.00th=[ 5211], 99.50th=[ 6259], 99.90th=[ 7963], 99.95th=[ 8979], 00:39:33.562 | 99.99th=[13042] 00:39:33.562 bw ( KiB/s): min=19400, max=104152, per=100.00%, avg=88950.12, stdev=11330.95, samples=106 00:39:33.562 iops : min= 4850, max=26038, avg=22237.46, stdev=2832.74, samples=106 00:39:33.562 write: IOPS=19.8k, BW=77.3MiB/s (81.1MB/s)(4640MiB/60002msec); 0 zone resets 00:39:33.562 slat (nsec): min=1279, max=576383, avg=8261.97, stdev=3399.26 00:39:33.562 clat (usec): min=1421, max=7001.5k, avg=3239.00, stdev=49316.18 00:39:33.562 lat (usec): min=1426, max=7001.5k, avg=3247.27, stdev=49316.20 00:39:33.562 clat percentiles (usec): 00:39:33.562 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:39:33.562 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2900], 00:39:33.562 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3294], 95.00th=[ 3949], 00:39:33.562 | 99.00th=[ 5276], 99.50th=[ 6325], 99.90th=[ 8094], 99.95th=[ 8979], 00:39:33.562 | 99.99th=[12256] 00:39:33.562 bw ( KiB/s): min=19488, max=104288, per=100.00%, avg=88896.43, stdev=11333.70, samples=106 00:39:33.562 iops : min= 4872, max=26072, avg=22224.05, stdev=2833.42, samples=106 00:39:33.562 lat (msec) : 2=0.31%, 4=95.06%, 10=4.60%, 20=0.02%, >=2000=0.01% 00:39:33.562 cpu : usr=9.59%, sys=32.87%, ctx=101453, majf=0, minf=15 00:39:33.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:39:33.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:33.562 issued rwts: total=1188530,1187871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.562 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:33.562 00:39:33.562 Run status group 0 (all jobs): 00:39:33.562 READ: bw=77.4MiB/s (81.1MB/s), 77.4MiB/s-77.4MiB/s (81.1MB/s-81.1MB/s), io=4643MiB (4868MB), run=60002-60002msec 00:39:33.562 WRITE: bw=77.3MiB/s (81.1MB/s), 77.3MiB/s-77.3MiB/s (81.1MB/s-81.1MB/s), io=4640MiB (4866MB), run=60002-60002msec 00:39:33.562 00:39:33.562 Disk stats (read/write): 00:39:33.562 ublkb1: ios=1186042/1185327, merge=0/0, ticks=3702860/3607593, in_queue=7310453, util=99.96% 00:39:33.562 05:50:45 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:39:33.562 05:50:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.562 05:50:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.562 [2024-11-20 05:50:45.305641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:39:33.562 [2024-11-20 05:50:45.341890] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:39:33.562 [2024-11-20 
05:50:45.342129] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:39:33.562 [2024-11-20 05:50:45.350889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:39:33.562 [2024-11-20 05:50:45.351041] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:39:33.562 [2024-11-20 05:50:45.351055] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.563 05:50:45 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.563 [2024-11-20 05:50:45.365026] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:39:33.563 [2024-11-20 05:50:45.373858] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:39:33.563 [2024-11-20 05:50:45.373926] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:33.563 05:50:45 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:39:33.563 05:50:45 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:39:33.563 05:50:45 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74212 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 74212 ']' 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 74212 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74212 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:33.563 killing process with pid 74212 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74212' 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@971 -- # kill 74212 00:39:33.563 05:50:45 ublk_recovery -- common/autotest_common.sh@976 -- # wait 74212 00:39:33.563 [2024-11-20 05:50:47.314992] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:39:33.563 [2024-11-20 05:50:47.315092] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:39:33.563 00:39:33.563 real 1m7.277s 00:39:33.563 user 1m51.885s 00:39:33.563 sys 0m37.196s 00:39:33.563 05:50:49 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:33.563 05:50:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:39:33.563 ************************************ 00:39:33.563 END TEST ublk_recovery 00:39:33.563 ************************************ 00:39:33.563 05:50:49 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@256 -- # timing_exit lib 00:39:33.563 05:50:49 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:33.563 05:50:49 -- common/autotest_common.sh@10 -- # set +x 00:39:33.563 05:50:49 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:39:33.563 05:50:49 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:39:33.563 05:50:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:39:33.563 05:50:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:33.563 05:50:49 -- common/autotest_common.sh@10 -- # set +x 00:39:33.563 ************************************ 00:39:33.563 START TEST ftl 00:39:33.563 ************************************ 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:39:33.563 * Looking for test storage... 00:39:33.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:33.563 05:50:49 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:33.563 05:50:49 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:39:33.563 05:50:49 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:39:33.563 05:50:49 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:39:33.563 05:50:49 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:33.563 05:50:49 ftl -- scripts/common.sh@344 -- # case "$op" in 00:39:33.563 05:50:49 ftl -- scripts/common.sh@345 -- # : 1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:33.563 05:50:49 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:33.563 05:50:49 ftl -- scripts/common.sh@365 -- # decimal 1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@353 -- # local d=1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:33.563 05:50:49 ftl -- scripts/common.sh@355 -- # echo 1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:39:33.563 05:50:49 ftl -- scripts/common.sh@366 -- # decimal 2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@353 -- # local d=2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:33.563 05:50:49 ftl -- scripts/common.sh@355 -- # echo 2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:39:33.563 05:50:49 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:33.563 05:50:49 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:33.563 05:50:49 ftl -- scripts/common.sh@368 -- # return 0 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:33.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.563 --rc genhtml_branch_coverage=1 00:39:33.563 --rc genhtml_function_coverage=1 00:39:33.563 --rc genhtml_legend=1 00:39:33.563 --rc geninfo_all_blocks=1 00:39:33.563 --rc geninfo_unexecuted_blocks=1 00:39:33.563 00:39:33.563 ' 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:33.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.563 --rc genhtml_branch_coverage=1 00:39:33.563 --rc genhtml_function_coverage=1 00:39:33.563 --rc genhtml_legend=1 00:39:33.563 --rc geninfo_all_blocks=1 00:39:33.563 --rc geninfo_unexecuted_blocks=1 00:39:33.563 00:39:33.563 ' 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:33.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.563 --rc genhtml_branch_coverage=1 00:39:33.563 --rc genhtml_function_coverage=1 00:39:33.563 --rc genhtml_legend=1 00:39:33.563 --rc geninfo_all_blocks=1 00:39:33.563 --rc geninfo_unexecuted_blocks=1 00:39:33.563 00:39:33.563 ' 00:39:33.563 05:50:49 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:33.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:33.563 --rc genhtml_branch_coverage=1 00:39:33.563 --rc genhtml_function_coverage=1 00:39:33.563 --rc genhtml_legend=1 00:39:33.563 --rc geninfo_all_blocks=1 00:39:33.563 --rc geninfo_unexecuted_blocks=1 00:39:33.563 00:39:33.563 ' 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:33.563 05:50:49 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:39:33.563 05:50:49 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:33.563 05:50:49 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:33.563 05:50:49 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:39:33.563 05:50:49 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:33.563 05:50:49 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:33.563 05:50:49 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:33.563 05:50:49 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:33.563 05:50:49 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:33.563 05:50:49 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:33.563 05:50:49 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:33.563 05:50:49 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:33.563 05:50:49 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:33.563 05:50:49 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:33.563 05:50:49 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:33.563 05:50:49 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:33.563 05:50:49 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:33.563 05:50:49 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:33.563 05:50:49 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:33.563 05:50:49 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:33.563 05:50:49 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:33.563 05:50:49 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:33.563 05:50:49 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:33.563 05:50:49 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:33.563 05:50:49 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:33.563 05:50:49 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:33.563 05:50:49 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:33.563 05:50:49 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:39:33.563 05:50:49 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:33.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:33.564 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:33.564 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:33.564 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:33.564 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:33.564 05:50:50 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:39:33.564 05:50:50 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75025 00:39:33.564 05:50:50 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75025 00:39:33.564 05:50:50 ftl -- common/autotest_common.sh@833 -- # '[' -z 75025 ']' 00:39:33.564 05:50:50 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:33.564 05:50:50 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:33.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:33.564 05:50:50 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:33.564 05:50:50 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:33.564 05:50:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:33.564 [2024-11-20 05:50:50.298098] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:39:33.564 [2024-11-20 05:50:50.298347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75025 ] 00:39:33.564 [2024-11-20 05:50:50.479559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:33.564 [2024-11-20 05:50:50.623184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.564 05:50:51 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:33.564 05:50:51 ftl -- common/autotest_common.sh@866 -- # return 0 00:39:33.564 05:50:51 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:39:33.564 05:50:51 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:39:33.564 05:50:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:33.564 05:50:52 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:39:33.564 05:50:53 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:39:33.564 05:50:53 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:39:33.564 05:50:53 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@50 -- # break 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@63 -- # break 00:39:33.823 05:50:53 ftl -- ftl/ftl.sh@66 -- # killprocess 75025 00:39:33.823 05:50:53 ftl -- common/autotest_common.sh@952 -- # '[' -z 75025 ']' 00:39:33.823 05:50:53 ftl -- common/autotest_common.sh@956 -- # kill -0 75025 00:39:33.823 05:50:53 ftl -- common/autotest_common.sh@957 -- # uname 00:39:33.823 05:50:53 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:33.823 05:50:53 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75025 00:39:34.083 05:50:53 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:34.083 05:50:53 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:34.083 05:50:53 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75025' 00:39:34.083 killing process with pid 75025 00:39:34.083 05:50:53 ftl -- common/autotest_common.sh@971 -- # kill 75025 00:39:34.083 05:50:53 ftl -- common/autotest_common.sh@976 -- # wait 75025 00:39:36.643 05:50:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:39:36.643 05:50:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:39:36.643 05:50:56 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:39:36.643 05:50:56 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:39:36.643 05:50:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:36.643 ************************************ 00:39:36.643 START TEST ftl_fio_basic 00:39:36.643 ************************************ 00:39:36.643 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:39:36.643 * Looking for test storage... 00:39:36.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:36.643 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.902 --rc genhtml_branch_coverage=1 00:39:36.902 --rc genhtml_function_coverage=1 00:39:36.902 --rc genhtml_legend=1 00:39:36.902 --rc geninfo_all_blocks=1 00:39:36.902 --rc geninfo_unexecuted_blocks=1 00:39:36.902 00:39:36.902 ' 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.902 --rc genhtml_branch_coverage=1 00:39:36.902 --rc genhtml_function_coverage=1 00:39:36.902 --rc genhtml_legend=1 00:39:36.902 --rc geninfo_all_blocks=1 00:39:36.902 --rc geninfo_unexecuted_blocks=1 00:39:36.902 00:39:36.902 ' 00:39:36.902 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:36.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.902 --rc genhtml_branch_coverage=1 00:39:36.902 --rc genhtml_function_coverage=1 00:39:36.902 --rc genhtml_legend=1 00:39:36.902 --rc geninfo_all_blocks=1 00:39:36.903 --rc geninfo_unexecuted_blocks=1 00:39:36.903 00:39:36.903 ' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:36.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.903 --rc genhtml_branch_coverage=1 00:39:36.903 --rc genhtml_function_coverage=1 00:39:36.903 --rc genhtml_legend=1 00:39:36.903 --rc geninfo_all_blocks=1 00:39:36.903 --rc geninfo_unexecuted_blocks=1 00:39:36.903 00:39:36.903 ' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75178 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75178 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 75178 ']' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:39:36.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:39:36.903 05:50:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:36.903 [2024-11-20 05:50:56.817962] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:39:36.903 [2024-11-20 05:50:56.818111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:39:37.162 [2024-11-20 05:50:56.999181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:37.422 [2024-11-20 05:50:57.151338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.422 [2024-11-20 05:50:57.151464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.422 [2024-11-20 05:50:57.151516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:39:38.364 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:38.932 { 00:39:38.932 "name": "nvme0n1", 00:39:38.932 "aliases": [ 00:39:38.932 "4fa32ba0-4b05-4854-ab1e-3d1579e4bfe0" 00:39:38.932 ], 00:39:38.932 "product_name": "NVMe disk", 00:39:38.932 "block_size": 4096, 00:39:38.932 "num_blocks": 1310720, 00:39:38.932 "uuid": "4fa32ba0-4b05-4854-ab1e-3d1579e4bfe0", 00:39:38.932 "numa_id": -1, 00:39:38.932 "assigned_rate_limits": { 00:39:38.932 "rw_ios_per_sec": 0, 00:39:38.932 "rw_mbytes_per_sec": 0, 00:39:38.932 "r_mbytes_per_sec": 0, 00:39:38.932 "w_mbytes_per_sec": 0 00:39:38.932 }, 00:39:38.932 "claimed": false, 00:39:38.932 "zoned": false, 00:39:38.932 "supported_io_types": { 00:39:38.932 "read": true, 00:39:38.932 "write": true, 00:39:38.932 "unmap": true, 00:39:38.932 "flush": true, 00:39:38.932 "reset": true, 00:39:38.932 "nvme_admin": true, 00:39:38.932 "nvme_io": true, 00:39:38.932 "nvme_io_md": false, 00:39:38.932 "write_zeroes": true, 00:39:38.932 "zcopy": false, 00:39:38.932 "get_zone_info": false, 00:39:38.932 "zone_management": false, 00:39:38.932 "zone_append": false, 00:39:38.932 "compare": true, 00:39:38.932 "compare_and_write": false, 00:39:38.932 "abort": true, 00:39:38.932 
"seek_hole": false, 00:39:38.932 "seek_data": false, 00:39:38.932 "copy": true, 00:39:38.932 "nvme_iov_md": false 00:39:38.932 }, 00:39:38.932 "driver_specific": { 00:39:38.932 "nvme": [ 00:39:38.932 { 00:39:38.932 "pci_address": "0000:00:11.0", 00:39:38.932 "trid": { 00:39:38.932 "trtype": "PCIe", 00:39:38.932 "traddr": "0000:00:11.0" 00:39:38.932 }, 00:39:38.932 "ctrlr_data": { 00:39:38.932 "cntlid": 0, 00:39:38.932 "vendor_id": "0x1b36", 00:39:38.932 "model_number": "QEMU NVMe Ctrl", 00:39:38.932 "serial_number": "12341", 00:39:38.932 "firmware_revision": "8.0.0", 00:39:38.932 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:38.932 "oacs": { 00:39:38.932 "security": 0, 00:39:38.932 "format": 1, 00:39:38.932 "firmware": 0, 00:39:38.932 "ns_manage": 1 00:39:38.932 }, 00:39:38.932 "multi_ctrlr": false, 00:39:38.932 "ana_reporting": false 00:39:38.932 }, 00:39:38.932 "vs": { 00:39:38.932 "nvme_version": "1.4" 00:39:38.932 }, 00:39:38.932 "ns_data": { 00:39:38.932 "id": 1, 00:39:38.932 "can_share": false 00:39:38.932 } 00:39:38.932 } 00:39:38.932 ], 00:39:38.932 "mp_policy": "active_passive" 00:39:38.932 } 00:39:38.932 } 00:39:38.932 ]' 00:39:38.932 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:39.191 05:50:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:39.451 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:39:39.451 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:39:39.709 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=1293bda4-6beb-4415-8a84-2942011d2efd 00:39:39.709 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1293bda4-6beb-4415-8a84-2942011d2efd 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=74b29946-d529-4782-a25d-f9149e0b8023 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 74b29946-d529-4782-a25d-f9149e0b8023 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=74b29946-d529-4782-a25d-f9149e0b8023 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 74b29946-d529-4782-a25d-f9149e0b8023 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=74b29946-d529-4782-a25d-f9149e0b8023 
00:39:39.968 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:39:39.968 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74b29946-d529-4782-a25d-f9149e0b8023 00:39:40.227 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:40.227 { 00:39:40.227 "name": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:40.227 "aliases": [ 00:39:40.227 "lvs/nvme0n1p0" 00:39:40.227 ], 00:39:40.227 "product_name": "Logical Volume", 00:39:40.227 "block_size": 4096, 00:39:40.227 "num_blocks": 26476544, 00:39:40.227 "uuid": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:40.227 "assigned_rate_limits": { 00:39:40.227 "rw_ios_per_sec": 0, 00:39:40.227 "rw_mbytes_per_sec": 0, 00:39:40.227 "r_mbytes_per_sec": 0, 00:39:40.227 "w_mbytes_per_sec": 0 00:39:40.227 }, 00:39:40.227 "claimed": false, 00:39:40.227 "zoned": false, 00:39:40.227 "supported_io_types": { 00:39:40.227 "read": true, 00:39:40.227 "write": true, 00:39:40.227 "unmap": true, 00:39:40.227 "flush": false, 00:39:40.227 "reset": true, 00:39:40.227 "nvme_admin": false, 00:39:40.227 "nvme_io": false, 00:39:40.227 "nvme_io_md": false, 00:39:40.227 "write_zeroes": true, 00:39:40.227 "zcopy": false, 00:39:40.227 "get_zone_info": false, 00:39:40.227 "zone_management": false, 00:39:40.227 "zone_append": false, 00:39:40.227 "compare": false, 00:39:40.227 "compare_and_write": false, 00:39:40.227 "abort": false, 00:39:40.227 "seek_hole": true, 00:39:40.227 "seek_data": true, 00:39:40.227 "copy": false, 00:39:40.227 "nvme_iov_md": false 00:39:40.227 }, 00:39:40.227 "driver_specific": { 00:39:40.227 "lvol": { 00:39:40.227 "lvol_store_uuid": "1293bda4-6beb-4415-8a84-2942011d2efd", 00:39:40.227 "base_bdev": "nvme0n1", 00:39:40.227 "thin_provision": true, 00:39:40.227 "num_allocated_clusters": 0, 00:39:40.227 "snapshot": false, 00:39:40.227 "clone": false, 00:39:40.227 "esnap_clone": false 00:39:40.227 } 00:39:40.227 } 00:39:40.227 } 00:39:40.227 ]' 00:39:40.227 05:50:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:39:40.227 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 74b29946-d529-4782-a25d-f9149e0b8023 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=74b29946-d529-4782-a25d-f9149e0b8023 00:39:40.486 05:51:00 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:39:40.486 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74b29946-d529-4782-a25d-f9149e0b8023 00:39:40.744 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:40.744 { 00:39:40.744 "name": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:40.744 "aliases": [ 00:39:40.744 "lvs/nvme0n1p0" 00:39:40.744 ], 00:39:40.744 "product_name": "Logical Volume", 00:39:40.744 "block_size": 4096, 00:39:40.744 "num_blocks": 26476544, 00:39:40.744 "uuid": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:40.744 "assigned_rate_limits": { 00:39:40.744 "rw_ios_per_sec": 0, 00:39:40.744 "rw_mbytes_per_sec": 0, 00:39:40.744 "r_mbytes_per_sec": 0, 00:39:40.744 "w_mbytes_per_sec": 0 00:39:40.744 }, 00:39:40.744 "claimed": false, 00:39:40.744 "zoned": false, 00:39:40.744 "supported_io_types": { 00:39:40.744 "read": true, 00:39:40.744 "write": true, 00:39:40.744 "unmap": true, 00:39:40.744 "flush": false, 00:39:40.745 "reset": true, 00:39:40.745 "nvme_admin": false, 00:39:40.745 "nvme_io": false, 00:39:40.745 "nvme_io_md": false, 00:39:40.745 "write_zeroes": true, 00:39:40.745 "zcopy": false, 00:39:40.745 "get_zone_info": false, 00:39:40.745 "zone_management": false, 00:39:40.745 "zone_append": false, 00:39:40.745 "compare": false, 00:39:40.745 "compare_and_write": false, 00:39:40.745 "abort": false, 00:39:40.745 "seek_hole": true, 00:39:40.745 "seek_data": true, 00:39:40.745 "copy": false, 00:39:40.745 "nvme_iov_md": false 00:39:40.745 }, 00:39:40.745 "driver_specific": { 00:39:40.745 "lvol": { 00:39:40.745 "lvol_store_uuid": "1293bda4-6beb-4415-8a84-2942011d2efd", 00:39:40.745 "base_bdev": "nvme0n1", 00:39:40.745 "thin_provision": true, 00:39:40.745 "num_allocated_clusters": 0, 00:39:40.745 "snapshot": false, 00:39:40.745 "clone": false, 00:39:40.745 "esnap_clone": false 00:39:40.745 } 00:39:40.745 } 00:39:40.745 } 00:39:40.745 ]' 00:39:40.745 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:39:41.021 05:51:00 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:39:41.293 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 74b29946-d529-4782-a25d-f9149e0b8023 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=74b29946-d529-4782-a25d-f9149e0b8023 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:39:41.293 05:51:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 74b29946-d529-4782-a25d-f9149e0b8023 00:39:41.552 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:39:41.552 { 00:39:41.552 "name": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:41.552 "aliases": [ 00:39:41.552 "lvs/nvme0n1p0" 00:39:41.552 ], 00:39:41.552 "product_name": "Logical Volume", 00:39:41.552 "block_size": 4096, 00:39:41.552 "num_blocks": 26476544, 00:39:41.552 "uuid": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:41.552 "assigned_rate_limits": { 00:39:41.552 "rw_ios_per_sec": 0, 00:39:41.552 "rw_mbytes_per_sec": 0, 00:39:41.552 "r_mbytes_per_sec": 0, 00:39:41.552 "w_mbytes_per_sec": 0 00:39:41.552 }, 00:39:41.552 "claimed": false, 00:39:41.552 "zoned": false, 00:39:41.552 "supported_io_types": { 00:39:41.552 "read": true, 00:39:41.553 "write": true, 00:39:41.553 "unmap": true, 00:39:41.553 "flush": false, 00:39:41.553 "reset": true, 00:39:41.553 "nvme_admin": false, 00:39:41.553 "nvme_io": false, 00:39:41.553 "nvme_io_md": false, 00:39:41.553 "write_zeroes": true, 00:39:41.553 "zcopy": false, 00:39:41.553 "get_zone_info": false, 00:39:41.553 "zone_management": false, 00:39:41.553 "zone_append": false, 00:39:41.553 "compare": false, 00:39:41.553 "compare_and_write": false, 00:39:41.553 "abort": false, 00:39:41.553 "seek_hole": true, 00:39:41.553 "seek_data": true, 00:39:41.553 "copy": false, 00:39:41.553 "nvme_iov_md": false 00:39:41.553 }, 00:39:41.553 "driver_specific": { 00:39:41.553 "lvol": { 00:39:41.553 "lvol_store_uuid": "1293bda4-6beb-4415-8a84-2942011d2efd", 00:39:41.553 "base_bdev": "nvme0n1", 00:39:41.553 "thin_provision": true, 00:39:41.553 "num_allocated_clusters": 0, 00:39:41.553 "snapshot": false, 00:39:41.553 "clone": false, 00:39:41.553 "esnap_clone": false 00:39:41.553 } 00:39:41.553 } 00:39:41.553 } 00:39:41.553 ]' 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:39:41.553 05:51:01 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 74b29946-d529-4782-a25d-f9149e0b8023 -c nvc0n1p0 --l2p_dram_limit 60 00:39:41.813 [2024-11-20 05:51:01.528049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.528124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:41.813 [2024-11-20 05:51:01.528144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:41.813 
[2024-11-20 05:51:01.528153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.528289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.528306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:41.813 [2024-11-20 05:51:01.528322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:39:41.813 [2024-11-20 05:51:01.528335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.528383] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:41.813 [2024-11-20 05:51:01.529612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:41.813 [2024-11-20 05:51:01.529650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.529660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:41.813 [2024-11-20 05:51:01.529672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.273 ms 00:39:41.813 [2024-11-20 05:51:01.529680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.529781] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7100c5b8-d8fd-4851-ab29-906e45a335eb 00:39:41.813 [2024-11-20 05:51:01.532547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.532587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:39:41.813 [2024-11-20 05:51:01.532599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:39:41.813 [2024-11-20 05:51:01.532610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.547870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.547933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:41.813 [2024-11-20 05:51:01.547947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.167 ms 00:39:41.813 [2024-11-20 05:51:01.547959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.548136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.548157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:41.813 [2024-11-20 05:51:01.548167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:39:41.813 [2024-11-20 05:51:01.548185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.548268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.548282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:41.813 [2024-11-20 05:51:01.548292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:41.813 [2024-11-20 05:51:01.548303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.548346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:41.813 [2024-11-20 05:51:01.555040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 
05:51:01.555080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:41.813 [2024-11-20 05:51:01.555096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.715 ms 00:39:41.813 [2024-11-20 05:51:01.555107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.555163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.555173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:41.813 [2024-11-20 05:51:01.555185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:41.813 [2024-11-20 05:51:01.555194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.555246] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:39:41.813 [2024-11-20 05:51:01.555415] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:41.813 [2024-11-20 05:51:01.555462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:41.813 [2024-11-20 05:51:01.555476] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:41.813 [2024-11-20 05:51:01.555490] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:41.813 [2024-11-20 05:51:01.555501] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:41.813 [2024-11-20 05:51:01.555513] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:39:41.813 [2024-11-20 05:51:01.555522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:41.813 [2024-11-20 05:51:01.555533] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:41.813 [2024-11-20 05:51:01.555542] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:41.813 [2024-11-20 05:51:01.555554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.555567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:41.813 [2024-11-20 05:51:01.555580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:39:41.813 [2024-11-20 05:51:01.555589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.555711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.813 [2024-11-20 05:51:01.555730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:41.813 [2024-11-20 05:51:01.555744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:39:41.813 [2024-11-20 05:51:01.555753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.813 [2024-11-20 05:51:01.555908] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:41.813 [2024-11-20 05:51:01.555928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:41.813 [2024-11-20 05:51:01.555944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:41.813 [2024-11-20 05:51:01.555952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.813 [2024-11-20 05:51:01.555965] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:39:41.813 [2024-11-20 05:51:01.555972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:41.813 [2024-11-20 05:51:01.555982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:39:41.813 [2024-11-20 05:51:01.555991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:41.813 [2024-11-20 05:51:01.556003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:41.813 [2024-11-20 05:51:01.556021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:41.813 [2024-11-20 05:51:01.556029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:39:41.813 [2024-11-20 05:51:01.556039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:41.813 [2024-11-20 05:51:01.556047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:41.813 [2024-11-20 05:51:01.556058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:39:41.813 [2024-11-20 05:51:01.556066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:41.813 [2024-11-20 05:51:01.556094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:39:41.813 [2024-11-20 05:51:01.556104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:41.813 [2024-11-20 05:51:01.556123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.813 [2024-11-20 05:51:01.556141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:41.813 [2024-11-20 05:51:01.556149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.813 [2024-11-20 05:51:01.556166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:41.813 [2024-11-20 05:51:01.556176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.813 [2024-11-20 05:51:01.556194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:41.813 [2024-11-20 05:51:01.556202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.813 [2024-11-20 05:51:01.556220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:41.813 [2024-11-20 05:51:01.556233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:41.813 [2024-11-20 05:51:01.556250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:41.813 [2024-11-20 05:51:01.556277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:39:41.813 [2024-11-20 05:51:01.556288] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:41.813 [2024-11-20 05:51:01.556296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:41.813 [2024-11-20 05:51:01.556308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:39:41.813 [2024-11-20 05:51:01.556315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.813 [2024-11-20 05:51:01.556325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:41.813 [2024-11-20 05:51:01.556333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:39:41.813 [2024-11-20 05:51:01.556344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.814 [2024-11-20 05:51:01.556352] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:41.814 [2024-11-20 05:51:01.556363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:41.814 [2024-11-20 05:51:01.556372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:41.814 [2024-11-20 05:51:01.556383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.814 [2024-11-20 05:51:01.556392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:41.814 [2024-11-20 05:51:01.556407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:41.814 [2024-11-20 05:51:01.556415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:41.814 [2024-11-20 05:51:01.556425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:41.814 [2024-11-20 05:51:01.556433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:41.814 [2024-11-20 05:51:01.556443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:41.814 [2024-11-20 05:51:01.556456] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:41.814 [2024-11-20 05:51:01.556470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:39:41.814 [2024-11-20 05:51:01.556492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:39:41.814 [2024-11-20 05:51:01.556500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:39:41.814 [2024-11-20 05:51:01.556511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:39:41.814 [2024-11-20 05:51:01.556519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:39:41.814 [2024-11-20 05:51:01.556529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:39:41.814 [2024-11-20 05:51:01.556537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:39:41.814 [2024-11-20 05:51:01.556547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:39:41.814 [2024-11-20 05:51:01.556555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:39:41.814 [2024-11-20 05:51:01.556569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:39:41.814 [2024-11-20 05:51:01.556616] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:41.814 [2024-11-20 05:51:01.556644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:41.814 [2024-11-20 05:51:01.556670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:41.814 [2024-11-20 05:51:01.556678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:41.814 [2024-11-20 05:51:01.556690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:41.814 [2024-11-20 05:51:01.556700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.814 [2024-11-20 05:51:01.556713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:41.814 [2024-11-20 05:51:01.556722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.862 ms 00:39:41.814 [2024-11-20 05:51:01.556733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.814 [2024-11-20 05:51:01.556845] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
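Editor's note, two things worth flagging in this stretch. First, the "fio.sh: line 52: [: -eq: unary operator expected" message a little further up is a classic unguarded bash numeric test: the xtrace shows '[' -eq 1 ']', meaning the variable under test expanded to nothing. It is harmless here (the test fails and the script falls through to fio.sh@56), but the guarded forms below avoid the noise; the variable name is hypothetical, since only the trace, not fio.sh itself, is visible in this log:

  [ "$some_flag" -eq 1 ]         # breaks when some_flag is empty or unset
  [ "${some_flag:-0}" -eq 1 ]    # default the empty expansion to 0
  [[ $some_flag -eq 1 ]]         # [[ ]] evaluates an empty operand as arithmetic 0

Second, the bdev_ftl_create call (-b ftl0 -d <lvol> -c nvc0n1p0 --l2p_dram_limit 60) and the layout dump above are self-consistent:

  20971520 L2P entries * 4 B   = 80 MiB   (the "Region l2p ... blocks: 80.00 MiB" above)
  20971520 blocks * 4096 B     = 80 GiB   (ftl0's user-visible size; the bdev_get_bdevs
                                           output further down reports num_blocks 20971520)

and --l2p_dram_limit 60 caps the resident portion of that 80 MiB table, which is why the startup later logs "l2p maximum resident size is: 59 (of 60) MiB".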
00:39:41.814 [2024-11-20 05:51:01.556864] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:39:47.086 [2024-11-20 05:51:06.141958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.142047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:39:47.086 [2024-11-20 05:51:06.142064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4593.955 ms 00:39:47.086 [2024-11-20 05:51:06.142076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.191879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.191940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:47.086 [2024-11-20 05:51:06.191955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.488 ms 00:39:47.086 [2024-11-20 05:51:06.191967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.192189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.192213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:47.086 [2024-11-20 05:51:06.192223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:39:47.086 [2024-11-20 05:51:06.192237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.264032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.264104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:47.086 [2024-11-20 05:51:06.264137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.870 ms 00:39:47.086 [2024-11-20 05:51:06.264150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.264221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.264233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:47.086 [2024-11-20 05:51:06.264242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:47.086 [2024-11-20 05:51:06.264253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.265160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.265190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:47.086 [2024-11-20 05:51:06.265200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:39:47.086 [2024-11-20 05:51:06.265214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.265355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.265378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:47.086 [2024-11-20 05:51:06.265387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:39:47.086 [2024-11-20 05:51:06.265401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.292350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.292416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:47.086 [2024-11-20 
05:51:06.292431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.964 ms 00:39:47.086 [2024-11-20 05:51:06.292443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.309439] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:47.086 [2024-11-20 05:51:06.337888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.337991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:47.086 [2024-11-20 05:51:06.338013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.363 ms 00:39:47.086 [2024-11-20 05:51:06.338025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.428958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.429049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:39:47.086 [2024-11-20 05:51:06.429073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.038 ms 00:39:47.086 [2024-11-20 05:51:06.429082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.086 [2024-11-20 05:51:06.429309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.086 [2024-11-20 05:51:06.429326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:47.086 [2024-11-20 05:51:06.429342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:39:47.086 [2024-11-20 05:51:06.429350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.465115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.465166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:39:47.087 [2024-11-20 05:51:06.465181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.774 ms 00:39:47.087 [2024-11-20 05:51:06.465190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.499969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.500036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:39:47.087 [2024-11-20 05:51:06.500052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.788 ms 00:39:47.087 [2024-11-20 05:51:06.500077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.500888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.500918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:47.087 [2024-11-20 05:51:06.500930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:39:47.087 [2024-11-20 05:51:06.500938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.611229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.611340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:39:47.087 [2024-11-20 05:51:06.611365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.421 ms 00:39:47.087 [2024-11-20 05:51:06.611379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 
05:51:06.655106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.655207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:39:47.087 [2024-11-20 05:51:06.655243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.607 ms 00:39:47.087 [2024-11-20 05:51:06.655252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.700614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.700704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:39:47.087 [2024-11-20 05:51:06.700723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.324 ms 00:39:47.087 [2024-11-20 05:51:06.700731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.737977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.738046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:47.087 [2024-11-20 05:51:06.738065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.243 ms 00:39:47.087 [2024-11-20 05:51:06.738074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.738136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.738147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:47.087 [2024-11-20 05:51:06.738167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:47.087 [2024-11-20 05:51:06.738175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.738311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.087 [2024-11-20 05:51:06.738332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:47.087 [2024-11-20 05:51:06.738344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:39:47.087 [2024-11-20 05:51:06.738352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.087 [2024-11-20 05:51:06.740057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5221.439 ms, result 0 00:39:47.087 { 00:39:47.087 "name": "ftl0", 00:39:47.087 "uuid": "7100c5b8-d8fd-4851-ab29-906e45a335eb" 00:39:47.087 } 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:39:47.087 05:51:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:47.345 05:51:07 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:39:47.345 [ 00:39:47.345 { 00:39:47.345 "name": "ftl0", 00:39:47.345 "aliases": [ 00:39:47.345 "7100c5b8-d8fd-4851-ab29-906e45a335eb" 00:39:47.345 ], 00:39:47.345 "product_name": "FTL 
disk", 00:39:47.345 "block_size": 4096, 00:39:47.345 "num_blocks": 20971520, 00:39:47.345 "uuid": "7100c5b8-d8fd-4851-ab29-906e45a335eb", 00:39:47.345 "assigned_rate_limits": { 00:39:47.345 "rw_ios_per_sec": 0, 00:39:47.345 "rw_mbytes_per_sec": 0, 00:39:47.345 "r_mbytes_per_sec": 0, 00:39:47.345 "w_mbytes_per_sec": 0 00:39:47.345 }, 00:39:47.345 "claimed": false, 00:39:47.345 "zoned": false, 00:39:47.345 "supported_io_types": { 00:39:47.345 "read": true, 00:39:47.345 "write": true, 00:39:47.345 "unmap": true, 00:39:47.345 "flush": true, 00:39:47.345 "reset": false, 00:39:47.345 "nvme_admin": false, 00:39:47.345 "nvme_io": false, 00:39:47.345 "nvme_io_md": false, 00:39:47.345 "write_zeroes": true, 00:39:47.345 "zcopy": false, 00:39:47.345 "get_zone_info": false, 00:39:47.345 "zone_management": false, 00:39:47.345 "zone_append": false, 00:39:47.345 "compare": false, 00:39:47.345 "compare_and_write": false, 00:39:47.345 "abort": false, 00:39:47.345 "seek_hole": false, 00:39:47.345 "seek_data": false, 00:39:47.345 "copy": false, 00:39:47.345 "nvme_iov_md": false 00:39:47.345 }, 00:39:47.345 "driver_specific": { 00:39:47.345 "ftl": { 00:39:47.345 "base_bdev": "74b29946-d529-4782-a25d-f9149e0b8023", 00:39:47.345 "cache": "nvc0n1p0" 00:39:47.345 } 00:39:47.345 } 00:39:47.345 } 00:39:47.345 ] 00:39:47.345 05:51:07 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:39:47.345 05:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:39:47.345 05:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:39:47.603 05:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:39:47.603 05:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:39:47.862 [2024-11-20 05:51:07.690877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.690957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:47.862 [2024-11-20 05:51:07.690975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:47.862 [2024-11-20 05:51:07.690986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.691026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:47.862 [2024-11-20 05:51:07.696214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.696249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:47.862 [2024-11-20 05:51:07.696264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:39:47.862 [2024-11-20 05:51:07.696289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.696776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.696811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:47.862 [2024-11-20 05:51:07.696826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:39:47.862 [2024-11-20 05:51:07.696835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.699534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.699561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:47.862 
[2024-11-20 05:51:07.699573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:39:47.862 [2024-11-20 05:51:07.699581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.704811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.704848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:47.862 [2024-11-20 05:51:07.704861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.197 ms 00:39:47.862 [2024-11-20 05:51:07.704869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.746686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.746766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:47.862 [2024-11-20 05:51:07.746801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.807 ms 00:39:47.862 [2024-11-20 05:51:07.746810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.772125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.772183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:47.862 [2024-11-20 05:51:07.772207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.263 ms 00:39:47.862 [2024-11-20 05:51:07.772217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:47.862 [2024-11-20 05:51:07.772513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:47.862 [2024-11-20 05:51:07.772536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:47.862 [2024-11-20 05:51:07.772550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:39:47.862 [2024-11-20 05:51:07.772560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.121 [2024-11-20 05:51:07.816101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.121 [2024-11-20 05:51:07.816158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:48.121 [2024-11-20 05:51:07.816176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.590 ms 00:39:48.121 [2024-11-20 05:51:07.816185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.121 [2024-11-20 05:51:07.858366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.121 [2024-11-20 05:51:07.858423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:48.121 [2024-11-20 05:51:07.858439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.197 ms 00:39:48.121 [2024-11-20 05:51:07.858448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.121 [2024-11-20 05:51:07.900463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.121 [2024-11-20 05:51:07.900520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:48.121 [2024-11-20 05:51:07.900537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.028 ms 00:39:48.121 [2024-11-20 05:51:07.900546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.121 [2024-11-20 05:51:07.943114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.121 [2024-11-20 05:51:07.943184] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:48.121 [2024-11-20 05:51:07.943218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.499 ms 00:39:48.121 [2024-11-20 05:51:07.943228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.121 [2024-11-20 05:51:07.943296] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:48.121 [2024-11-20 05:51:07.943315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 
[2024-11-20 05:51:07.943582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:39:48.121 [2024-11-20 05:51:07.943890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:48.121 [2024-11-20 05:51:07.943992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:48.122 [2024-11-20 05:51:07.944552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:48.122 [2024-11-20 05:51:07.944565] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7100c5b8-d8fd-4851-ab29-906e45a335eb 00:39:48.122 [2024-11-20 05:51:07.944576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:48.122 [2024-11-20 05:51:07.944591] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:48.122 [2024-11-20 05:51:07.944601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:48.122 [2024-11-20 05:51:07.944618] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:48.122 [2024-11-20 05:51:07.944628] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:48.122 [2024-11-20 05:51:07.944640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:48.122 [2024-11-20 05:51:07.944650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:48.122 [2024-11-20 05:51:07.944661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:48.122 [2024-11-20 05:51:07.944669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:48.122 [2024-11-20 05:51:07.944681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.122 [2024-11-20 05:51:07.944690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:48.122 [2024-11-20 05:51:07.944703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:39:48.122 [2024-11-20 05:51:07.944713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.122 [2024-11-20 05:51:07.970736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.122 [2024-11-20 05:51:07.970837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:48.122 [2024-11-20 05:51:07.970855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.983 ms 00:39:48.122 [2024-11-20 05:51:07.970865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.122 [2024-11-20 05:51:07.971557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:48.122 [2024-11-20 05:51:07.971574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:48.122 [2024-11-20 05:51:07.971587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:39:48.122 [2024-11-20 05:51:07.971597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.381 [2024-11-20 05:51:08.057436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.381 [2024-11-20 05:51:08.057525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:48.381 [2024-11-20 05:51:08.057560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.381 [2024-11-20 05:51:08.057571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
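One clarification for the statistics dump above: FTL's write amplification factor (WAF) is the ratio of total media writes to user writes, and this shutdown happened before any user I/O was issued, so the ratio is undefined and logged as infinity. With the values ftl0 reports:

    WAF = total writes / user writes = 960 / 0 -> inf

The 960 media writes are presumably the device's own metadata traffic (superblock, band and L2P bookkeeping) from startup and the clean shutdown traced here, not test data.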
00:39:48.381 [2024-11-20 05:51:08.057674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.381 [2024-11-20 05:51:08.057688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:48.381 [2024-11-20 05:51:08.057701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.381 [2024-11-20 05:51:08.057710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.381 [2024-11-20 05:51:08.057891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.381 [2024-11-20 05:51:08.057911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:48.381 [2024-11-20 05:51:08.057924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.381 [2024-11-20 05:51:08.057935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.381 [2024-11-20 05:51:08.057980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.381 [2024-11-20 05:51:08.057990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:48.381 [2024-11-20 05:51:08.058002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.381 [2024-11-20 05:51:08.058012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.381 [2024-11-20 05:51:08.226762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.381 [2024-11-20 05:51:08.226857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:48.381 [2024-11-20 05:51:08.226877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.381 [2024-11-20 05:51:08.226887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.641 [2024-11-20 05:51:08.360261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.641 [2024-11-20 05:51:08.360346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:48.641 [2024-11-20 05:51:08.360381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.641 [2024-11-20 05:51:08.360392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.641 [2024-11-20 05:51:08.360547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.641 [2024-11-20 05:51:08.360567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:48.641 [2024-11-20 05:51:08.360586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.641 [2024-11-20 05:51:08.360596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.641 [2024-11-20 05:51:08.360701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.641 [2024-11-20 05:51:08.360721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:48.641 [2024-11-20 05:51:08.360735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.641 [2024-11-20 05:51:08.360744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.641 [2024-11-20 05:51:08.360934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.641 [2024-11-20 05:51:08.360955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:48.641 [2024-11-20 05:51:08.360968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.641 [2024-11-20 
05:51:08.360982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.641 [2024-11-20 05:51:08.361068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.641 [2024-11-20 05:51:08.361086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:48.641 [2024-11-20 05:51:08.361100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.641 [2024-11-20 05:51:08.361110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.642 [2024-11-20 05:51:08.361172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.642 [2024-11-20 05:51:08.361188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:48.642 [2024-11-20 05:51:08.361201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.642 [2024-11-20 05:51:08.361210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.642 [2024-11-20 05:51:08.361284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:48.642 [2024-11-20 05:51:08.361299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:48.642 [2024-11-20 05:51:08.361312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:48.642 [2024-11-20 05:51:08.361322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:48.642 [2024-11-20 05:51:08.361555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 671.912 ms, result 0 00:39:48.642 true 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75178 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 75178 ']' 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 75178 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75178 00:39:48.642 killing process with pid 75178 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75178' 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 75178 00:39:48.642 05:51:08 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 75178 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:39:56.763 05:51:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:39:56.763 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:39:56.763 fio-3.35 00:39:56.763 Starting 1 thread 00:40:02.039 00:40:02.039 test: (groupid=0, jobs=1): err= 0: pid=75460: Wed Nov 20 05:51:21 2024 00:40:02.039 read: IOPS=1038, BW=68.9MiB/s (72.3MB/s)(255MiB/3692msec) 00:40:02.039 slat (usec): min=4, max=171, avg= 7.50, stdev= 4.41 00:40:02.039 clat (usec): min=300, max=1057, avg=427.25, stdev=59.23 00:40:02.039 lat (usec): min=307, max=1063, avg=434.75, stdev=59.82 00:40:02.039 clat percentiles (usec): 00:40:02.039 | 1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 392], 00:40:02.039 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 433], 00:40:02.039 | 70.00th=[ 465], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 529], 00:40:02.039 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 676], 99.95th=[ 693], 00:40:02.039 | 99.99th=[ 1057] 00:40:02.039 write: IOPS=1045, BW=69.4MiB/s (72.8MB/s)(256MiB/3688msec); 0 zone resets 00:40:02.039 slat (usec): min=15, max=113, avg=23.81, stdev= 8.14 00:40:02.039 clat (usec): min=330, max=927, avg=488.68, stdev=67.38 00:40:02.039 lat (usec): min=361, max=953, avg=512.49, stdev=67.98 00:40:02.039 clat percentiles (usec): 00:40:02.039 | 1.00th=[ 367], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 433], 00:40:02.039 | 30.00th=[ 441], 40.00th=[ 461], 50.00th=[ 486], 60.00th=[ 498], 00:40:02.039 | 70.00th=[ 510], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 603], 00:40:02.039 | 99.00th=[ 709], 99.50th=[ 791], 99.90th=[ 873], 99.95th=[ 898], 00:40:02.039 | 99.99th=[ 930] 00:40:02.039 bw ( KiB/s): min=68136, max=74120, per=100.00%, avg=71264.00, stdev=2084.84, samples=7 00:40:02.039 iops : min= 1002, max= 1090, avg=1048.00, stdev=30.66, samples=7 00:40:02.039 lat (usec) : 500=75.55%, 750=24.09%, 1000=0.35% 00:40:02.039 lat (msec) : 
2=0.01% 00:40:02.039 cpu : usr=99.08%, sys=0.24%, ctx=7, majf=0, minf=1169 00:40:02.039 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.039 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.039 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.039 00:40:02.039 Run status group 0 (all jobs): 00:40:02.039 READ: bw=68.9MiB/s (72.3MB/s), 68.9MiB/s-68.9MiB/s (72.3MB/s-72.3MB/s), io=255MiB (267MB), run=3692-3692msec 00:40:02.039 WRITE: bw=69.4MiB/s (72.8MB/s), 69.4MiB/s-69.4MiB/s (72.8MB/s-72.8MB/s), io=256MiB (269MB), run=3688-3688msec 00:40:03.412 ----------------------------------------------------- 00:40:03.412 Suppressions used: 00:40:03.412 count bytes template 00:40:03.412 1 5 /usr/src/fio/parse.c 00:40:03.412 1 8 libtcmalloc_minimal.so 00:40:03.412 1 904 libcrypto.so 00:40:03.412 ----------------------------------------------------- 00:40:03.412 00:40:03.412 05:51:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:40:03.412 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:03.412 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:40:03.412 05:51:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:40:03.412 05:51:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:40:03.412 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:03.413 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:03.671 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:03.671 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:03.671 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:40:03.671 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:03.671 05:51:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:40:03.671 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:40:03.671 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:40:03.671 fio-3.35 00:40:03.671 Starting 2 threads 00:40:35.795 00:40:35.795 first_half: (groupid=0, jobs=1): err= 0: pid=75574: Wed Nov 20 05:51:51 2024 00:40:35.795 read: IOPS=2493, BW=9973KiB/s (10.2MB/s)(255MiB/26168msec) 00:40:35.795 slat (nsec): min=4107, max=35222, avg=6944.29, stdev=1549.16 00:40:35.795 clat (usec): min=940, max=312139, avg=37852.76, stdev=22255.81 00:40:35.795 lat (usec): min=947, max=312147, avg=37859.71, stdev=22256.07 00:40:35.795 clat percentiles (msec): 00:40:35.795 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:40:35.795 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:40:35.795 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 50], 00:40:35.795 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 249], 99.95th=[ 279], 00:40:35.795 | 99.99th=[ 305] 00:40:35.795 write: IOPS=3141, BW=12.3MiB/s (12.9MB/s)(256MiB/20863msec); 0 zone resets 00:40:35.795 slat (usec): min=4, max=682, avg= 9.56, stdev= 6.81 00:40:35.795 clat (usec): min=427, max=106920, avg=13371.17, stdev=22672.74 00:40:35.795 lat (usec): min=446, max=106927, avg=13380.73, stdev=22672.89 00:40:35.795 clat percentiles (usec): 00:40:35.795 | 1.00th=[ 1156], 5.00th=[ 1516], 10.00th=[ 1745], 20.00th=[ 2073], 00:40:35.795 | 30.00th=[ 3228], 40.00th=[ 5473], 50.00th=[ 6718], 60.00th=[ 7439], 00:40:35.795 | 70.00th=[ 8586], 80.00th=[ 12518], 90.00th=[ 17171], 95.00th=[ 83362], 00:40:35.795 | 99.00th=[ 90702], 99.50th=[100140], 99.90th=[104334], 99.95th=[105382], 00:40:35.795 | 99.99th=[106431] 00:40:35.795 bw ( KiB/s): min= 1472, max=40808, per=83.44%, avg=20968.88, stdev=11021.68, samples=25 00:40:35.795 iops : min= 368, max=10202, avg=5242.20, stdev=2755.40, samples=25 00:40:35.795 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.13% 00:40:35.795 lat (msec) : 2=9.00%, 4=7.87%, 10=21.30%, 20=7.91%, 50=46.88% 00:40:35.795 lat (msec) : 100=5.20%, 250=1.62%, 500=0.05% 00:40:35.795 cpu : usr=99.25%, sys=0.15%, ctx=53, majf=0, minf=5549 00:40:35.795 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:40:35.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.795 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.795 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.795 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.795 second_half: (groupid=0, jobs=1): err= 0: pid=75575: Wed Nov 20 05:51:51 2024 00:40:35.795 read: IOPS=2481, BW=9925KiB/s (10.2MB/s)(255MiB/26274msec) 00:40:35.795 slat (nsec): min=4115, max=50716, avg=6927.67, stdev=1512.22 00:40:35.795 clat (usec): min=936, max=317707, avg=37499.17, stdev=21810.13 00:40:35.795 lat (usec): min=946, max=317715, avg=37506.10, stdev=21810.38 00:40:35.795 clat percentiles (msec): 00:40:35.795 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:40:35.795 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:40:35.795 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 40], 95.00th=[ 51], 
00:40:35.795 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 232], 99.95th=[ 257], 00:40:35.795 | 99.99th=[ 309] 00:40:35.795 write: IOPS=3348, BW=13.1MiB/s (13.7MB/s)(256MiB/19573msec); 0 zone resets 00:40:35.795 slat (usec): min=4, max=700, avg= 9.64, stdev= 6.03 00:40:35.795 clat (usec): min=446, max=107567, avg=13961.60, stdev=23218.63 00:40:35.795 lat (usec): min=467, max=107579, avg=13971.25, stdev=23218.75 00:40:35.795 clat percentiles (usec): 00:40:35.795 | 1.00th=[ 1106], 5.00th=[ 1434], 10.00th=[ 1680], 20.00th=[ 1975], 00:40:35.795 | 30.00th=[ 2704], 40.00th=[ 5145], 50.00th=[ 6587], 60.00th=[ 7635], 00:40:35.795 | 70.00th=[ 9372], 80.00th=[ 13435], 90.00th=[ 35390], 95.00th=[ 84411], 00:40:35.795 | 99.00th=[ 91751], 99.50th=[101188], 99.90th=[105382], 99.95th=[106431], 00:40:35.795 | 99.99th=[106431] 00:40:35.795 bw ( KiB/s): min= 1856, max=42928, per=83.45%, avg=20971.52, stdev=11870.73, samples=25 00:40:35.795 iops : min= 464, max=10732, avg=5242.88, stdev=2967.68, samples=25 00:40:35.796 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.17% 00:40:35.796 lat (msec) : 2=10.44%, 4=7.21%, 10=19.48%, 20=8.84%, 50=46.70% 00:40:35.796 lat (msec) : 100=5.42%, 250=1.67%, 500=0.03% 00:40:35.796 cpu : usr=99.29%, sys=0.18%, ctx=37, majf=0, minf=5570 00:40:35.796 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:40:35.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:35.796 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:35.796 issued rwts: total=65194,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:35.796 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:35.796 00:40:35.796 Run status group 0 (all jobs): 00:40:35.796 READ: bw=19.4MiB/s (20.3MB/s), 9925KiB/s-9973KiB/s (10.2MB/s-10.2MB/s), io=510MiB (534MB), run=26168-26274msec 00:40:35.796 WRITE: bw=24.5MiB/s (25.7MB/s), 12.3MiB/s-13.1MiB/s (12.9MB/s-13.7MB/s), io=512MiB (537MB), run=19573-20863msec 00:40:35.796 ----------------------------------------------------- 00:40:35.796 Suppressions used: 00:40:35.796 count bytes template 00:40:35.796 2 10 /usr/src/fio/parse.c 00:40:35.796 2 192 /usr/src/fio/iolog.c 00:40:35.796 1 8 libtcmalloc_minimal.so 00:40:35.796 1 904 libcrypto.so 00:40:35.796 ----------------------------------------------------- 00:40:35.796 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 
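For anyone reproducing these fio steps by hand: the fio_bdev helper traced here (and for the two runs above) locates the ASan runtime that the SPDK fio plugin was built against and preloads it ahead of the plugin, since the sanitizer must initialize before any instrumented code runs, then launches fio against the job file. A minimal sketch of an equivalent manual invocation follows; the job file is an illustrative stand-in rather than the actual randw-verify-depth128.fio from the SPDK tree, and the spdk_json_conf path is an assumption.

    # Sketch only: write an illustrative job file standing in for the real one.
    cat > /tmp/depth128-example.fio <<'EOF'
    ; Illustrative stand-in, not the shipped randw-verify-depth128.fio
    ; (omits its verify settings).
    [global]
    ioengine=spdk_bdev
    ; Assumed path: a bdev JSON config that defines the ftl0 bdev.
    spdk_json_conf=/path/to/ftl.json
    ; The SPDK fio plugin requires fio's thread mode.
    thread=1
    [test]
    filename=ftl0
    rw=randwrite
    bs=4k
    iodepth=128
    EOF
    # Preload ASan first, then the SPDK bdev engine, as the xtrace shows.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio /tmp/depth128-example.fio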
00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:40:35.796 05:51:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:40:35.796 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:40:35.796 fio-3.35 00:40:35.796 Starting 1 thread 00:40:50.727 00:40:50.727 test: (groupid=0, jobs=1): err= 0: pid=75916: Wed Nov 20 05:52:10 2024 00:40:50.727 read: IOPS=7603, BW=29.7MiB/s (31.1MB/s)(255MiB/8575msec) 00:40:50.727 slat (nsec): min=4003, max=26077, avg=6114.97, stdev=1256.92 00:40:50.727 clat (usec): min=729, max=32882, avg=16822.81, stdev=926.07 00:40:50.727 lat (usec): min=733, max=32889, avg=16828.92, stdev=926.07 00:40:50.727 clat percentiles (usec): 00:40:50.727 | 1.00th=[15795], 5.00th=[16057], 10.00th=[16188], 20.00th=[16450], 00:40:50.727 | 30.00th=[16581], 40.00th=[16581], 50.00th=[16712], 60.00th=[16909], 00:40:50.727 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17433], 00:40:50.727 | 99.00th=[19268], 99.50th=[20317], 99.90th=[28967], 99.95th=[29492], 00:40:50.727 | 99.99th=[32113] 00:40:50.727 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(256MiB/5482msec); 0 zone resets 00:40:50.727 slat (usec): min=4, max=878, avg= 9.24, stdev= 8.47 00:40:50.727 clat (usec): min=645, max=61105, avg=10656.61, stdev=12904.46 00:40:50.727 lat (usec): min=653, max=61132, avg=10665.85, stdev=12904.45 00:40:50.727 clat percentiles (usec): 00:40:50.727 | 1.00th=[ 1123], 5.00th=[ 1369], 10.00th=[ 1532], 20.00th=[ 1713], 00:40:50.727 | 30.00th=[ 1893], 40.00th=[ 2278], 50.00th=[ 7111], 60.00th=[ 8225], 00:40:50.727 | 70.00th=[ 9241], 80.00th=[11469], 90.00th=[38011], 95.00th=[39584], 00:40:50.727 | 99.00th=[42730], 99.50th=[49021], 99.90th=[57410], 99.95th=[60031], 00:40:50.727 | 99.99th=[61080] 00:40:50.727 bw ( KiB/s): min=40440, max=63440, per=99.65%, avg=47652.91, stdev=8486.28, samples=11 00:40:50.727 iops : min=10110, max=15860, avg=11913.36, stdev=2121.42, samples=11 00:40:50.727 lat (usec) : 750=0.01%, 1000=0.11% 00:40:50.727 lat (msec) : 2=17.18%, 4=3.71%, 10=16.27%, 20=54.36%, 50=8.14% 00:40:50.727 lat (msec) : 100=0.23% 00:40:50.727 cpu : usr=98.88%, sys=0.38%, ctx=22, 
majf=0, minf=5565 00:40:50.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:40:50.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:50.727 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:50.727 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:50.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:50.727 00:40:50.727 Run status group 0 (all jobs): 00:40:50.727 READ: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=255MiB (267MB), run=8575-8575msec 00:40:50.727 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=256MiB (268MB), run=5482-5482msec 00:40:53.263 ----------------------------------------------------- 00:40:53.263 Suppressions used: 00:40:53.263 count bytes template 00:40:53.263 1 5 /usr/src/fio/parse.c 00:40:53.263 2 192 /usr/src/fio/iolog.c 00:40:53.263 1 8 libtcmalloc_minimal.so 00:40:53.263 1 904 libcrypto.so 00:40:53.263 ----------------------------------------------------- 00:40:53.263 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:40:53.263 Remove shared memory files 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58107 /dev/shm/spdk_tgt_trace.pid74064 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:40:53.263 00:40:53.263 real 1m16.345s 00:40:53.263 user 2m47.745s 00:40:53.263 sys 0m4.571s 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:40:53.263 05:52:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:40:53.263 ************************************ 00:40:53.263 END TEST ftl_fio_basic 00:40:53.263 ************************************ 00:40:53.263 05:52:12 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:40:53.263 05:52:12 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:40:53.263 05:52:12 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:40:53.263 05:52:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:40:53.263 ************************************ 00:40:53.263 START TEST ftl_bdevperf 00:40:53.263 ************************************ 00:40:53.263 05:52:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:40:53.263 * Looking for test storage... 
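The bdevperf suite that begins here is driven by a single entry point taking two PCI addresses; as the device= and cache_device= assignments further down confirm, the first becomes the FTL base device and the second its non-volatile cache. The invocation, restated from the run_test line above:

    # Base bdev on 0000:00:11.0, FTL non-volatile cache device on 0000:00:10.0.
    /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0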
00:40:53.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:40:53.263 05:52:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:53.263 05:52:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:40:53.263 05:52:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:53.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.263 --rc genhtml_branch_coverage=1 00:40:53.263 --rc genhtml_function_coverage=1 00:40:53.263 --rc genhtml_legend=1 00:40:53.263 --rc geninfo_all_blocks=1 00:40:53.263 --rc geninfo_unexecuted_blocks=1 00:40:53.263 00:40:53.263 ' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:53.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.263 --rc genhtml_branch_coverage=1 00:40:53.263 
--rc genhtml_function_coverage=1 00:40:53.263 --rc genhtml_legend=1 00:40:53.263 --rc geninfo_all_blocks=1 00:40:53.263 --rc geninfo_unexecuted_blocks=1 00:40:53.263 00:40:53.263 ' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:53.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.263 --rc genhtml_branch_coverage=1 00:40:53.263 --rc genhtml_function_coverage=1 00:40:53.263 --rc genhtml_legend=1 00:40:53.263 --rc geninfo_all_blocks=1 00:40:53.263 --rc geninfo_unexecuted_blocks=1 00:40:53.263 00:40:53.263 ' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:53.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:53.263 --rc genhtml_branch_coverage=1 00:40:53.263 --rc genhtml_function_coverage=1 00:40:53.263 --rc genhtml_legend=1 00:40:53.263 --rc geninfo_all_blocks=1 00:40:53.263 --rc geninfo_unexecuted_blocks=1 00:40:53.263 00:40:53.263 ' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:53.263 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76166 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76166 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 76166 ']' 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:40:53.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:40:53.264 05:52:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:40:53.521 [2024-11-20 05:52:13.205621] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
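At this point the harness has launched bdevperf with -z (start up, then idle until an RPC kicks off the workload) and -T ftl0, which appears to restrict the run to the ftl0 bdev, and waitforlisten is polling the default RPC socket until the application answers. A rough equivalent of that wait loop, using rpc_get_methods as the liveness probe (an assumption; the real waitforlisten helper also checks the pid and bounds its retries):

    # Sketch: poll the default RPC socket until the bdevperf application responds.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done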
00:40:53.522 [2024-11-20 05:52:13.205777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76166 ] 00:40:53.522 [2024-11-20 05:52:13.385820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:53.780 [2024-11-20 05:52:13.526439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:40:54.348 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:40:54.607 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:40:54.961 { 00:40:54.961 "name": "nvme0n1", 00:40:54.961 "aliases": [ 00:40:54.961 "6e11af04-2283-46b1-8deb-9de3fe55ed37" 00:40:54.961 ], 00:40:54.961 "product_name": "NVMe disk", 00:40:54.961 "block_size": 4096, 00:40:54.961 "num_blocks": 1310720, 00:40:54.961 "uuid": "6e11af04-2283-46b1-8deb-9de3fe55ed37", 00:40:54.961 "numa_id": -1, 00:40:54.961 "assigned_rate_limits": { 00:40:54.961 "rw_ios_per_sec": 0, 00:40:54.961 "rw_mbytes_per_sec": 0, 00:40:54.961 "r_mbytes_per_sec": 0, 00:40:54.961 "w_mbytes_per_sec": 0 00:40:54.961 }, 00:40:54.961 "claimed": true, 00:40:54.961 "claim_type": "read_many_write_one", 00:40:54.961 "zoned": false, 00:40:54.961 "supported_io_types": { 00:40:54.961 "read": true, 00:40:54.961 "write": true, 00:40:54.961 "unmap": true, 00:40:54.961 "flush": true, 00:40:54.961 "reset": true, 00:40:54.961 "nvme_admin": true, 00:40:54.961 "nvme_io": true, 00:40:54.961 "nvme_io_md": false, 00:40:54.961 "write_zeroes": true, 00:40:54.961 "zcopy": false, 00:40:54.961 "get_zone_info": false, 00:40:54.961 "zone_management": false, 00:40:54.961 "zone_append": false, 00:40:54.961 "compare": true, 00:40:54.961 "compare_and_write": false, 00:40:54.961 "abort": true, 00:40:54.961 "seek_hole": false, 00:40:54.961 "seek_data": false, 00:40:54.961 "copy": true, 00:40:54.961 "nvme_iov_md": false 00:40:54.961 }, 00:40:54.961 "driver_specific": { 00:40:54.961 
"nvme": [ 00:40:54.961 { 00:40:54.961 "pci_address": "0000:00:11.0", 00:40:54.961 "trid": { 00:40:54.961 "trtype": "PCIe", 00:40:54.961 "traddr": "0000:00:11.0" 00:40:54.961 }, 00:40:54.961 "ctrlr_data": { 00:40:54.961 "cntlid": 0, 00:40:54.961 "vendor_id": "0x1b36", 00:40:54.961 "model_number": "QEMU NVMe Ctrl", 00:40:54.961 "serial_number": "12341", 00:40:54.961 "firmware_revision": "8.0.0", 00:40:54.961 "subnqn": "nqn.2019-08.org.qemu:12341", 00:40:54.961 "oacs": { 00:40:54.961 "security": 0, 00:40:54.961 "format": 1, 00:40:54.961 "firmware": 0, 00:40:54.961 "ns_manage": 1 00:40:54.961 }, 00:40:54.961 "multi_ctrlr": false, 00:40:54.961 "ana_reporting": false 00:40:54.961 }, 00:40:54.961 "vs": { 00:40:54.961 "nvme_version": "1.4" 00:40:54.961 }, 00:40:54.961 "ns_data": { 00:40:54.961 "id": 1, 00:40:54.961 "can_share": false 00:40:54.961 } 00:40:54.961 } 00:40:54.961 ], 00:40:54.961 "mp_policy": "active_passive" 00:40:54.961 } 00:40:54.961 } 00:40:54.961 ]' 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:40:54.961 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=1293bda4-6beb-4415-8a84-2942011d2efd 00:40:54.962 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:40:54.962 05:52:14 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1293bda4-6beb-4415-8a84-2942011d2efd 00:40:55.224 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:40:55.482 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=37d4203c-07db-4320-928c-28df0f8d878e 00:40:55.482 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 37d4203c-07db-4320-928c-28df0f8d878e 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:55.741 05:52:15 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:40:55.741 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:40:56.000 { 00:40:56.000 "name": "913c8d20-f042-44c4-be37-989ce3cc7abe", 00:40:56.000 "aliases": [ 00:40:56.000 "lvs/nvme0n1p0" 00:40:56.000 ], 00:40:56.000 "product_name": "Logical Volume", 00:40:56.000 "block_size": 4096, 00:40:56.000 "num_blocks": 26476544, 00:40:56.000 "uuid": "913c8d20-f042-44c4-be37-989ce3cc7abe", 00:40:56.000 "assigned_rate_limits": { 00:40:56.000 "rw_ios_per_sec": 0, 00:40:56.000 "rw_mbytes_per_sec": 0, 00:40:56.000 "r_mbytes_per_sec": 0, 00:40:56.000 "w_mbytes_per_sec": 0 00:40:56.000 }, 00:40:56.000 "claimed": false, 00:40:56.000 "zoned": false, 00:40:56.000 "supported_io_types": { 00:40:56.000 "read": true, 00:40:56.000 "write": true, 00:40:56.000 "unmap": true, 00:40:56.000 "flush": false, 00:40:56.000 "reset": true, 00:40:56.000 "nvme_admin": false, 00:40:56.000 "nvme_io": false, 00:40:56.000 "nvme_io_md": false, 00:40:56.000 "write_zeroes": true, 00:40:56.000 "zcopy": false, 00:40:56.000 "get_zone_info": false, 00:40:56.000 "zone_management": false, 00:40:56.000 "zone_append": false, 00:40:56.000 "compare": false, 00:40:56.000 "compare_and_write": false, 00:40:56.000 "abort": false, 00:40:56.000 "seek_hole": true, 00:40:56.000 "seek_data": true, 00:40:56.000 "copy": false, 00:40:56.000 "nvme_iov_md": false 00:40:56.000 }, 00:40:56.000 "driver_specific": { 00:40:56.000 "lvol": { 00:40:56.000 "lvol_store_uuid": "37d4203c-07db-4320-928c-28df0f8d878e", 00:40:56.000 "base_bdev": "nvme0n1", 00:40:56.000 "thin_provision": true, 00:40:56.000 "num_allocated_clusters": 0, 00:40:56.000 "snapshot": false, 00:40:56.000 "clone": false, 00:40:56.000 "esnap_clone": false 00:40:56.000 } 00:40:56.000 } 00:40:56.000 } 00:40:56.000 ]' 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:40:56.000 05:52:15 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:40:56.260 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:56.519 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:40:56.520 { 00:40:56.520 "name": "913c8d20-f042-44c4-be37-989ce3cc7abe", 00:40:56.520 "aliases": [ 00:40:56.520 "lvs/nvme0n1p0" 00:40:56.520 ], 00:40:56.520 "product_name": "Logical Volume", 00:40:56.520 "block_size": 4096, 00:40:56.520 "num_blocks": 26476544, 00:40:56.520 "uuid": "913c8d20-f042-44c4-be37-989ce3cc7abe", 00:40:56.520 "assigned_rate_limits": { 00:40:56.520 "rw_ios_per_sec": 0, 00:40:56.520 "rw_mbytes_per_sec": 0, 00:40:56.520 "r_mbytes_per_sec": 0, 00:40:56.520 "w_mbytes_per_sec": 0 00:40:56.520 }, 00:40:56.520 "claimed": false, 00:40:56.520 "zoned": false, 00:40:56.520 "supported_io_types": { 00:40:56.520 "read": true, 00:40:56.520 "write": true, 00:40:56.520 "unmap": true, 00:40:56.520 "flush": false, 00:40:56.520 "reset": true, 00:40:56.520 "nvme_admin": false, 00:40:56.520 "nvme_io": false, 00:40:56.520 "nvme_io_md": false, 00:40:56.520 "write_zeroes": true, 00:40:56.520 "zcopy": false, 00:40:56.520 "get_zone_info": false, 00:40:56.520 "zone_management": false, 00:40:56.520 "zone_append": false, 00:40:56.520 "compare": false, 00:40:56.520 "compare_and_write": false, 00:40:56.520 "abort": false, 00:40:56.520 "seek_hole": true, 00:40:56.520 "seek_data": true, 00:40:56.520 "copy": false, 00:40:56.520 "nvme_iov_md": false 00:40:56.520 }, 00:40:56.520 "driver_specific": { 00:40:56.520 "lvol": { 00:40:56.520 "lvol_store_uuid": "37d4203c-07db-4320-928c-28df0f8d878e", 00:40:56.520 "base_bdev": "nvme0n1", 00:40:56.520 "thin_provision": true, 00:40:56.520 "num_allocated_clusters": 0, 00:40:56.520 "snapshot": false, 00:40:56.520 "clone": false, 00:40:56.520 "esnap_clone": false 00:40:56.520 } 00:40:56.520 } 00:40:56.520 } 00:40:56.520 ]' 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:40:56.520 05:52:16 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:40:56.810 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 913c8d20-f042-44c4-be37-989ce3cc7abe 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:40:57.071 { 00:40:57.071 "name": "913c8d20-f042-44c4-be37-989ce3cc7abe", 00:40:57.071 "aliases": [ 00:40:57.071 "lvs/nvme0n1p0" 00:40:57.071 ], 00:40:57.071 "product_name": "Logical Volume", 00:40:57.071 "block_size": 4096, 00:40:57.071 "num_blocks": 26476544, 00:40:57.071 "uuid": "913c8d20-f042-44c4-be37-989ce3cc7abe", 00:40:57.071 "assigned_rate_limits": { 00:40:57.071 "rw_ios_per_sec": 0, 00:40:57.071 "rw_mbytes_per_sec": 0, 00:40:57.071 "r_mbytes_per_sec": 0, 00:40:57.071 "w_mbytes_per_sec": 0 00:40:57.071 }, 00:40:57.071 "claimed": false, 00:40:57.071 "zoned": false, 00:40:57.071 "supported_io_types": { 00:40:57.071 "read": true, 00:40:57.071 "write": true, 00:40:57.071 "unmap": true, 00:40:57.071 "flush": false, 00:40:57.071 "reset": true, 00:40:57.071 "nvme_admin": false, 00:40:57.071 "nvme_io": false, 00:40:57.071 "nvme_io_md": false, 00:40:57.071 "write_zeroes": true, 00:40:57.071 "zcopy": false, 00:40:57.071 "get_zone_info": false, 00:40:57.071 "zone_management": false, 00:40:57.071 "zone_append": false, 00:40:57.071 "compare": false, 00:40:57.071 "compare_and_write": false, 00:40:57.071 "abort": false, 00:40:57.071 "seek_hole": true, 00:40:57.071 "seek_data": true, 00:40:57.071 "copy": false, 00:40:57.071 "nvme_iov_md": false 00:40:57.071 }, 00:40:57.071 "driver_specific": { 00:40:57.071 "lvol": { 00:40:57.071 "lvol_store_uuid": "37d4203c-07db-4320-928c-28df0f8d878e", 00:40:57.071 "base_bdev": "nvme0n1", 00:40:57.071 "thin_provision": true, 00:40:57.071 "num_allocated_clusters": 0, 00:40:57.071 "snapshot": false, 00:40:57.071 "clone": false, 00:40:57.071 "esnap_clone": false 00:40:57.071 } 00:40:57.071 } 00:40:57.071 } 00:40:57.071 ]' 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:40:57.071 05:52:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 913c8d20-f042-44c4-be37-989ce3cc7abe -c nvc0n1p0 --l2p_dram_limit 20 00:40:57.330 [2024-11-20 05:52:17.093966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.094030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:57.330 [2024-11-20 05:52:17.094047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:40:57.330 [2024-11-20 05:52:17.094058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.094131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.094147] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:57.330 [2024-11-20 05:52:17.094155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:40:57.330 [2024-11-20 05:52:17.094166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.094184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:57.330 [2024-11-20 05:52:17.095394] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:57.330 [2024-11-20 05:52:17.095424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.095436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:57.330 [2024-11-20 05:52:17.095445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:40:57.330 [2024-11-20 05:52:17.095456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.095541] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b78b39a5-9187-47b5-afac-bd58a74f5c54 00:40:57.330 [2024-11-20 05:52:17.097996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.098035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:40:57.330 [2024-11-20 05:52:17.098067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:40:57.330 [2024-11-20 05:52:17.098079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.112022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.112059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:57.330 [2024-11-20 05:52:17.112073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.910 ms 00:40:57.330 [2024-11-20 05:52:17.112081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.112215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.112230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:57.330 [2024-11-20 05:52:17.112245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:40:57.330 [2024-11-20 05:52:17.112253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.112319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.112329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:57.330 [2024-11-20 05:52:17.112340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:57.330 [2024-11-20 05:52:17.112347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.112372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:57.330 [2024-11-20 05:52:17.118814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.118849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:57.330 [2024-11-20 05:52:17.118859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.465 ms 00:40:57.330 [2024-11-20 05:52:17.118874] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.118904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.118917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:57.330 [2024-11-20 05:52:17.118925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:57.330 [2024-11-20 05:52:17.118935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.118964] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:40:57.330 [2024-11-20 05:52:17.119108] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:57.330 [2024-11-20 05:52:17.119121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:57.330 [2024-11-20 05:52:17.119135] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:57.330 [2024-11-20 05:52:17.119144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:57.330 [2024-11-20 05:52:17.119156] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:57.330 [2024-11-20 05:52:17.119163] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:57.330 [2024-11-20 05:52:17.119173] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:57.330 [2024-11-20 05:52:17.119180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:57.330 [2024-11-20 05:52:17.119190] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:57.330 [2024-11-20 05:52:17.119198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.119212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:57.330 [2024-11-20 05:52:17.119220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:40:57.330 [2024-11-20 05:52:17.119231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.119317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.330 [2024-11-20 05:52:17.119330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:57.330 [2024-11-20 05:52:17.119338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:40:57.330 [2024-11-20 05:52:17.119351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.330 [2024-11-20 05:52:17.119432] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:57.331 [2024-11-20 05:52:17.119450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:57.331 [2024-11-20 05:52:17.119462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:57.331 [2024-11-20 05:52:17.119493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:57.331 
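
The l2p region size reported just above follows from two figures already printed in this layout dump: 20,971,520 L2P entries at an address size of 4 bytes is exactly 80 MiB. A quick cross-check, using only numbers taken from this log:

  # 20971520 L2P entries x 4-byte addresses, expressed in MiB
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80 -> "Region l2p ... blocks: 80.00 MiB"

Since the full table is 80 MiB but the device was created with --l2p_dram_limit 20, only a fraction of it can be resident at once; that is presumably why the l2p cache further down reports a maximum resident size of 19 (of 20) MiB rather than holding the whole table.
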
[2024-11-20 05:52:17.119510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:57.331 [2024-11-20 05:52:17.119517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:57.331 [2024-11-20 05:52:17.119534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:57.331 [2024-11-20 05:52:17.119555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:57.331 [2024-11-20 05:52:17.119562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:57.331 [2024-11-20 05:52:17.119587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:57.331 [2024-11-20 05:52:17.119595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:57.331 [2024-11-20 05:52:17.119608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:57.331 [2024-11-20 05:52:17.119624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:57.331 [2024-11-20 05:52:17.119648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:57.331 [2024-11-20 05:52:17.119673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:57.331 [2024-11-20 05:52:17.119694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:57.331 [2024-11-20 05:52:17.119719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:57.331 [2024-11-20 05:52:17.119745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:57.331 [2024-11-20 05:52:17.119760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:57.331 [2024-11-20 05:52:17.119768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:57.331 [2024-11-20 05:52:17.119775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:57.331 [2024-11-20 05:52:17.119783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:57.331 [2024-11-20 05:52:17.119789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:40:57.331 [2024-11-20 05:52:17.119798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:57.331 [2024-11-20 05:52:17.119824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:57.331 [2024-11-20 05:52:17.119832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119840] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:57.331 [2024-11-20 05:52:17.119849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:57.331 [2024-11-20 05:52:17.119858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:57.331 [2024-11-20 05:52:17.119884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:57.331 [2024-11-20 05:52:17.119891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:57.331 [2024-11-20 05:52:17.119901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:57.331 [2024-11-20 05:52:17.119908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:57.331 [2024-11-20 05:52:17.119917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:57.331 [2024-11-20 05:52:17.119924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:57.331 [2024-11-20 05:52:17.119938] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:57.331 [2024-11-20 05:52:17.119947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.119968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:57.331 [2024-11-20 05:52:17.119975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:57.331 [2024-11-20 05:52:17.119984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:57.331 [2024-11-20 05:52:17.119992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:57.331 [2024-11-20 05:52:17.120001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:57.331 [2024-11-20 05:52:17.120008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:57.331 [2024-11-20 05:52:17.120018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:57.331 [2024-11-20 05:52:17.120026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:57.331 [2024-11-20 05:52:17.120039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:57.331 [2024-11-20 05:52:17.120046] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.120056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.120067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.120077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.120085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:57.331 [2024-11-20 05:52:17.120093] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:57.331 [2024-11-20 05:52:17.120102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.120126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:57.331 [2024-11-20 05:52:17.120134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:57.331 [2024-11-20 05:52:17.120143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:57.331 [2024-11-20 05:52:17.120150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:57.331 [2024-11-20 05:52:17.120161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:57.331 [2024-11-20 05:52:17.120178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:57.331 [2024-11-20 05:52:17.120189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:40:57.331 [2024-11-20 05:52:17.120197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:57.331 [2024-11-20 05:52:17.120244] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
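
Everything from "Check configuration" down to this scrub step was driven by the single bdev_ftl_create call issued earlier in this log. For reference, a condensed replay of the RPC sequence from this run; the UUID placeholders must be filled in from the corresponding command outputs, and the paths are this run's layout:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                       # prints the lvstore UUID
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>     # thin-provisioned 103424 MiB base lvol
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1                         # 5171 MiB NV cache partition nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The -t 240 timeout matters here: on a fresh NV cache the create path scrubs the entire cache region before startup completes (the 5-chunk scrub recorded below takes about 3.3 s on this VM, but larger or real devices can take far longer).
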
00:40:57.331 [2024-11-20 05:52:17.120254] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:41:00.619 [2024-11-20 05:52:20.393887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.393963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:41:00.619 [2024-11-20 05:52:20.393988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3279.950 ms 00:41:00.619 [2024-11-20 05:52:20.393997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.443132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.443208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:00.619 [2024-11-20 05:52:20.443225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.896 ms 00:41:00.619 [2024-11-20 05:52:20.443234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.443409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.443421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:00.619 [2024-11-20 05:52:20.443436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:41:00.619 [2024-11-20 05:52:20.443444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.512471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.512528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:00.619 [2024-11-20 05:52:20.512545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.090 ms 00:41:00.619 [2024-11-20 05:52:20.512569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.512621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.512634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:00.619 [2024-11-20 05:52:20.512645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:00.619 [2024-11-20 05:52:20.512654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.513503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.513524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:00.619 [2024-11-20 05:52:20.513536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:41:00.619 [2024-11-20 05:52:20.513543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.513659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.513676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:00.619 [2024-11-20 05:52:20.513690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:41:00.619 [2024-11-20 05:52:20.513698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.619 [2024-11-20 05:52:20.536340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.619 [2024-11-20 05:52:20.536387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:00.619 [2024-11-20 
05:52:20.536418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.661 ms 00:41:00.619 [2024-11-20 05:52:20.536427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.879 [2024-11-20 05:52:20.550740] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:41:00.879 [2024-11-20 05:52:20.560280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.879 [2024-11-20 05:52:20.560322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:00.879 [2024-11-20 05:52:20.560352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.786 ms 00:41:00.879 [2024-11-20 05:52:20.560364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.879 [2024-11-20 05:52:20.652506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.879 [2024-11-20 05:52:20.652599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:41:00.879 [2024-11-20 05:52:20.652616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.261 ms 00:41:00.879 [2024-11-20 05:52:20.652627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.879 [2024-11-20 05:52:20.652846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.879 [2024-11-20 05:52:20.652864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:00.879 [2024-11-20 05:52:20.652873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:41:00.879 [2024-11-20 05:52:20.652884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.879 [2024-11-20 05:52:20.689463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.879 [2024-11-20 05:52:20.689518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:41:00.879 [2024-11-20 05:52:20.689531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.601 ms 00:41:00.879 [2024-11-20 05:52:20.689558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.879 [2024-11-20 05:52:20.724741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.879 [2024-11-20 05:52:20.724792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:41:00.879 [2024-11-20 05:52:20.724835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.213 ms 00:41:00.879 [2024-11-20 05:52:20.724846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.879 [2024-11-20 05:52:20.725628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.879 [2024-11-20 05:52:20.725659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:00.879 [2024-11-20 05:52:20.725669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:41:00.879 [2024-11-20 05:52:20.725680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.138 [2024-11-20 05:52:20.828476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.138 [2024-11-20 05:52:20.828556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:41:01.138 [2024-11-20 05:52:20.828588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.942 ms 00:41:01.138 [2024-11-20 05:52:20.828600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.138 [2024-11-20 
05:52:20.866499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.138 [2024-11-20 05:52:20.866577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:41:01.138 [2024-11-20 05:52:20.866596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.893 ms 00:41:01.138 [2024-11-20 05:52:20.866607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.138 [2024-11-20 05:52:20.903759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.138 [2024-11-20 05:52:20.903849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:41:01.138 [2024-11-20 05:52:20.903864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.184 ms 00:41:01.138 [2024-11-20 05:52:20.903875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.138 [2024-11-20 05:52:20.941077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.138 [2024-11-20 05:52:20.941176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:01.138 [2024-11-20 05:52:20.941206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.232 ms 00:41:01.138 [2024-11-20 05:52:20.941217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.138 [2024-11-20 05:52:20.941261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.138 [2024-11-20 05:52:20.941277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:01.139 [2024-11-20 05:52:20.941294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:01.139 [2024-11-20 05:52:20.941304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.139 [2024-11-20 05:52:20.941411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:01.139 [2024-11-20 05:52:20.941428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:01.139 [2024-11-20 05:52:20.941436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:41:01.139 [2024-11-20 05:52:20.941446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:01.139 [2024-11-20 05:52:20.942968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3855.832 ms, result 0 00:41:01.139 { 00:41:01.139 "name": "ftl0", 00:41:01.139 "uuid": "b78b39a5-9187-47b5-afac-bd58a74f5c54" 00:41:01.139 } 00:41:01.139 05:52:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:41:01.139 05:52:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:41:01.139 05:52:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:41:01.398 05:52:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:41:01.398 [2024-11-20 05:52:21.310642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:41:01.657 I/O size of 69632 is greater than zero copy threshold (65536). 00:41:01.657 Zero copy mechanism will not be used. 00:41:01.657 Running I/O for 4 seconds... 
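
The 69632-byte I/O size in this first pass is 68 KiB, i.e. 17 blocks of 4096 bytes, and as the notice above says it exceeds bdevperf's 65536-byte zero-copy threshold. The IOPS and MiB/s columns of the result table below are consistent with that size:

  echo "scale=2; 1792.21 * 69632 / 1048576" | bc   # 119.01 MiB/s from 1792.21 IOPS at 68 KiB per I/O
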
00:41:03.531 1804.00 IOPS, 119.80 MiB/s [2024-11-20T05:52:24.387Z] 1793.50 IOPS, 119.10 MiB/s [2024-11-20T05:52:25.322Z] 1790.00 IOPS, 118.87 MiB/s [2024-11-20T05:52:25.322Z] 1792.75 IOPS, 119.05 MiB/s 00:41:05.403 Latency(us) 00:41:05.403 [2024-11-20T05:52:25.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:05.403 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:41:05.403 ftl0 : 4.00 1792.21 119.01 0.00 0.00 585.85 209.27 2303.78 00:41:05.403 [2024-11-20T05:52:25.322Z] =================================================================================================================== 00:41:05.403 [2024-11-20T05:52:25.322Z] Total : 1792.21 119.01 0.00 0.00 585.85 209.27 2303.78 00:41:05.403 [2024-11-20 05:52:25.315852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:41:05.661 { 00:41:05.661 "results": [ 00:41:05.661 { 00:41:05.661 "job": "ftl0", 00:41:05.661 "core_mask": "0x1", 00:41:05.661 "workload": "randwrite", 00:41:05.661 "status": "finished", 00:41:05.661 "queue_depth": 1, 00:41:05.661 "io_size": 69632, 00:41:05.661 "runtime": 4.00177, 00:41:05.661 "iops": 1792.2069484253218, 00:41:05.661 "mibps": 119.01374266886903, 00:41:05.661 "io_failed": 0, 00:41:05.661 "io_timeout": 0, 00:41:05.661 "avg_latency_us": 585.850367148323, 00:41:05.661 "min_latency_us": 209.271615720524, 00:41:05.661 "max_latency_us": 2303.776419213974 00:41:05.661 } 00:41:05.661 ], 00:41:05.661 "core_count": 1 00:41:05.661 } 00:41:05.661 05:52:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:41:05.661 [2024-11-20 05:52:25.458438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:41:05.661 Running I/O for 4 seconds... 
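
A sanity check on the qd=128 randwrite pass below: with a full queue, mean latency should sit close to queue_depth / IOPS (Little's law). Using the figures from the result table:

  echo "scale=2; 128 * 1000000 / 9912.74" | bc   # ~12912.67 us vs the reported 12884.53 us average

The small gap is expected, since the estimate assumes the queue stays full for the whole 4.02 s runtime, including ramp-up and drain.
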
00:41:07.975 10320.00 IOPS, 40.31 MiB/s [2024-11-20T05:52:28.832Z] 9999.50 IOPS, 39.06 MiB/s [2024-11-20T05:52:29.775Z] 9934.67 IOPS, 38.81 MiB/s [2024-11-20T05:52:29.775Z] 9921.75 IOPS, 38.76 MiB/s 00:41:09.856 Latency(us) 00:41:09.856 [2024-11-20T05:52:29.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.856 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.856 ftl0 : 4.02 9912.74 38.72 0.00 0.00 12884.53 291.55 22322.31 00:41:09.856 [2024-11-20T05:52:29.775Z] =================================================================================================================== 00:41:09.856 [2024-11-20T05:52:29.775Z] Total : 9912.74 38.72 0.00 0.00 12884.53 0.00 22322.31 00:41:09.856 [2024-11-20 05:52:29.478710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:41:09.856 { 00:41:09.856 "results": [ 00:41:09.856 { 00:41:09.856 "job": "ftl0", 00:41:09.856 "core_mask": "0x1", 00:41:09.856 "workload": "randwrite", 00:41:09.856 "status": "finished", 00:41:09.856 "queue_depth": 128, 00:41:09.856 "io_size": 4096, 00:41:09.856 "runtime": 4.01655, 00:41:09.856 "iops": 9912.7360545742, 00:41:09.856 "mibps": 38.72162521318047, 00:41:09.856 "io_failed": 0, 00:41:09.856 "io_timeout": 0, 00:41:09.856 "avg_latency_us": 12884.531933467395, 00:41:09.856 "min_latency_us": 291.54934497816595, 00:41:09.856 "max_latency_us": 22322.305676855896 00:41:09.856 } 00:41:09.856 ], 00:41:09.856 "core_count": 1 00:41:09.856 } 00:41:09.856 05:52:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:41:09.856 [2024-11-20 05:52:29.613100] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:41:09.856 Running I/O for 4 seconds... 
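
The verify pass below reports "Verification LBA range: start 0x0 length 0x1400000". That hex length is the same 20,971,520 figure the startup layout dump listed as "L2P entries", so the verify job walks the entire user-visible capacity of ftl0:

  printf '%d\n' 0x1400000                            # 20971520 blocks
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))   # 80 GiB of user data

The layout dump's data_btm region on the 103424 MiB base device is 102400 MiB, so the margin between that and the 80 GiB of user LBAs is what FTL keeps for its metadata regions and spare bands.
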
00:41:11.732 7958.00 IOPS, 31.09 MiB/s [2024-11-20T05:52:33.033Z] 8014.00 IOPS, 31.30 MiB/s [2024-11-20T05:52:33.971Z] 8042.33 IOPS, 31.42 MiB/s [2024-11-20T05:52:33.971Z] 8047.75 IOPS, 31.44 MiB/s 00:41:14.052 Latency(us) 00:41:14.052 [2024-11-20T05:52:33.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:14.052 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:41:14.052 Verification LBA range: start 0x0 length 0x1400000 00:41:14.052 ftl0 : 4.01 8060.14 31.48 0.00 0.00 15830.02 280.82 18201.26 00:41:14.052 [2024-11-20T05:52:33.971Z] =================================================================================================================== 00:41:14.052 [2024-11-20T05:52:33.971Z] Total : 8060.14 31.48 0.00 0.00 15830.02 0.00 18201.26 00:41:14.052 [2024-11-20 05:52:33.636529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:41:14.052 { 00:41:14.052 "results": [ 00:41:14.052 { 00:41:14.052 "job": "ftl0", 00:41:14.052 "core_mask": "0x1", 00:41:14.052 "workload": "verify", 00:41:14.052 "status": "finished", 00:41:14.052 "verify_range": { 00:41:14.052 "start": 0, 00:41:14.052 "length": 20971520 00:41:14.052 }, 00:41:14.052 "queue_depth": 128, 00:41:14.052 "io_size": 4096, 00:41:14.052 "runtime": 4.009484, 00:41:14.052 "iops": 8060.13940946017, 00:41:14.052 "mibps": 31.48491956820379, 00:41:14.052 "io_failed": 0, 00:41:14.052 "io_timeout": 0, 00:41:14.052 "avg_latency_us": 15830.016859675972, 00:41:14.052 "min_latency_us": 280.8174672489083, 00:41:14.052 "max_latency_us": 18201.26462882096 00:41:14.052 } 00:41:14.052 ], 00:41:14.052 "core_count": 1 00:41:14.052 } 00:41:14.052 05:52:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:41:14.052 [2024-11-20 05:52:33.832703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.052 [2024-11-20 05:52:33.832768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:14.052 [2024-11-20 05:52:33.832784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:14.052 [2024-11-20 05:52:33.832795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.052 [2024-11-20 05:52:33.832840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:14.052 [2024-11-20 05:52:33.837753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.052 [2024-11-20 05:52:33.837825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:14.052 [2024-11-20 05:52:33.837842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.901 ms 00:41:14.052 [2024-11-20 05:52:33.837852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.052 [2024-11-20 05:52:33.839794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.052 [2024-11-20 05:52:33.839859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:14.052 [2024-11-20 05:52:33.839897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:41:14.052 [2024-11-20 05:52:33.839907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.057274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.057365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:41:14.313 [2024-11-20 05:52:34.057392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 217.745 ms 00:41:14.313 [2024-11-20 05:52:34.057403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.062570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.062603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:14.313 [2024-11-20 05:52:34.062616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.134 ms 00:41:14.313 [2024-11-20 05:52:34.062628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.098956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.099000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:14.313 [2024-11-20 05:52:34.099015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.346 ms 00:41:14.313 [2024-11-20 05:52:34.099023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.120503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.120546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:14.313 [2024-11-20 05:52:34.120562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.465 ms 00:41:14.313 [2024-11-20 05:52:34.120570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.120728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.120740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:14.313 [2024-11-20 05:52:34.120753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:41:14.313 [2024-11-20 05:52:34.120761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.155636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.155697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:14.313 [2024-11-20 05:52:34.155712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.921 ms 00:41:14.313 [2024-11-20 05:52:34.155720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.190290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.190332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:14.313 [2024-11-20 05:52:34.190363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.580 ms 00:41:14.313 [2024-11-20 05:52:34.190370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.313 [2024-11-20 05:52:34.225270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.313 [2024-11-20 05:52:34.225306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:14.313 [2024-11-20 05:52:34.225320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.926 ms 00:41:14.313 [2024-11-20 05:52:34.225327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.573 [2024-11-20 05:52:34.259892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.573 [2024-11-20 05:52:34.259951] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:14.573 [2024-11-20 05:52:34.259972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.524 ms 00:41:14.573 [2024-11-20 05:52:34.259995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.573 [2024-11-20 05:52:34.260031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:14.573 [2024-11-20 05:52:34.260057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:14.573 [2024-11-20 05:52:34.260134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:41:14.574 [2024-11-20 05:52:34.260262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260974] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.260993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.261004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:14.574 [2024-11-20 05:52:34.261018] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:14.574 [2024-11-20 05:52:34.261029] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b78b39a5-9187-47b5-afac-bd58a74f5c54 00:41:14.574 [2024-11-20 05:52:34.261040] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:14.574 [2024-11-20 05:52:34.261049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:14.575 [2024-11-20 05:52:34.261057] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:14.575 [2024-11-20 05:52:34.261067] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:14.575 [2024-11-20 05:52:34.261074] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:14.575 [2024-11-20 05:52:34.261085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:14.575 [2024-11-20 05:52:34.261092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:14.575 [2024-11-20 05:52:34.261104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:14.575 [2024-11-20 05:52:34.261110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:14.575 [2024-11-20 05:52:34.261120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.575 [2024-11-20 05:52:34.261128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:14.575 [2024-11-20 05:52:34.261139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:41:14.575 [2024-11-20 05:52:34.261146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.282388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.575 [2024-11-20 05:52:34.282429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:14.575 [2024-11-20 05:52:34.282443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.234 ms 00:41:14.575 [2024-11-20 05:52:34.282451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.283121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:14.575 [2024-11-20 05:52:34.283143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:14.575 [2024-11-20 05:52:34.283155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:41:14.575 [2024-11-20 05:52:34.283163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.341140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.575 [2024-11-20 05:52:34.341206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:14.575 [2024-11-20 05:52:34.341224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.575 [2024-11-20 05:52:34.341233] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.341303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.575 [2024-11-20 05:52:34.341312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:14.575 [2024-11-20 05:52:34.341323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.575 [2024-11-20 05:52:34.341333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.341498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.575 [2024-11-20 05:52:34.341512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:14.575 [2024-11-20 05:52:34.341523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.575 [2024-11-20 05:52:34.341531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.341551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.575 [2024-11-20 05:52:34.341559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:14.575 [2024-11-20 05:52:34.341577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.575 [2024-11-20 05:52:34.341585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.575 [2024-11-20 05:52:34.476985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.575 [2024-11-20 05:52:34.477052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:14.575 [2024-11-20 05:52:34.477072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.575 [2024-11-20 05:52:34.477080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.835 [2024-11-20 05:52:34.585672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.835 [2024-11-20 05:52:34.585738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:14.835 [2024-11-20 05:52:34.585756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.835 [2024-11-20 05:52:34.585781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.835 [2024-11-20 05:52:34.585944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.835 [2024-11-20 05:52:34.585957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:14.835 [2024-11-20 05:52:34.585968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.835 [2024-11-20 05:52:34.585976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.835 [2024-11-20 05:52:34.586027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.835 [2024-11-20 05:52:34.586037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:14.835 [2024-11-20 05:52:34.586047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:14.835 [2024-11-20 05:52:34.586055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:14.835 [2024-11-20 05:52:34.586167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:14.835 [2024-11-20 05:52:34.586190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:14.835 [2024-11-20 05:52:34.586205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:41:14.835 [2024-11-20 05:52:34.586213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:14.835 [2024-11-20 05:52:34.586256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:41:14.835 [2024-11-20 05:52:34.586266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:41:14.835 [2024-11-20 05:52:34.586277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:41:14.835 [2024-11-20 05:52:34.586285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:14.835 [2024-11-20 05:52:34.586331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:41:14.835 [2024-11-20 05:52:34.586344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:41:14.835 [2024-11-20 05:52:34.586353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:41:14.835 [2024-11-20 05:52:34.586361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:14.835 [2024-11-20 05:52:34.586412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:41:14.835 [2024-11-20 05:52:34.586437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:41:14.835 [2024-11-20 05:52:34.586448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:41:14.835 [2024-11-20 05:52:34.586455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:14.835 [2024-11-20 05:52:34.586605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 755.312 ms, result 0
00:41:14.835 true
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76166
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 76166 ']'
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 76166
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76166
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:41:14.835 killing process with pid 76166 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76166' 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 76166
00:41:14.835 Received shutdown signal, test time was about 4.000000 seconds
00:41:14.835
00:41:14.835 Latency(us)
00:41:14.835 [2024-11-20T05:52:34.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:14.835 [2024-11-20T05:52:34.754Z] ===================================================================================================================
00:41:14.835 [2024-11-20T05:52:34.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:14.835 05:52:34 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 76166
00:41:20.099 Remove shared memory files
00:41:20.099 05:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:41:20.099 05:52:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:41:20.099 05:52:39 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:41:20.099 05:52:39
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:41:20.099 05:52:40 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:41:20.099 05:52:40 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:41:20.099 05:52:40 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:41:20.099 05:52:40 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:41:20.099 ************************************ 00:41:20.099 END TEST ftl_bdevperf 00:41:20.099 ************************************ 00:41:20.099 00:41:20.099 real 0m27.170s 00:41:20.099 user 0m29.672s 00:41:20.099 sys 0m1.435s 00:41:20.099 05:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:41:20.099 05:52:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:41:20.358 05:52:40 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:41:20.358 05:52:40 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:41:20.358 05:52:40 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:41:20.358 05:52:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:41:20.358 ************************************ 00:41:20.358 START TEST ftl_trim 00:41:20.358 ************************************ 00:41:20.358 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:41:20.358 * Looking for test storage... 00:41:20.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:41:20.358 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:20.358 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:41:20.358 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:20.617 05:52:40 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:20.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.617 --rc genhtml_branch_coverage=1 00:41:20.617 --rc genhtml_function_coverage=1 00:41:20.617 --rc genhtml_legend=1 00:41:20.617 --rc geninfo_all_blocks=1 00:41:20.617 --rc geninfo_unexecuted_blocks=1 00:41:20.617 00:41:20.617 ' 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:20.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.617 --rc genhtml_branch_coverage=1 00:41:20.617 --rc genhtml_function_coverage=1 00:41:20.617 --rc genhtml_legend=1 00:41:20.617 --rc geninfo_all_blocks=1 00:41:20.617 --rc geninfo_unexecuted_blocks=1 00:41:20.617 00:41:20.617 ' 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:20.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.617 --rc genhtml_branch_coverage=1 00:41:20.617 --rc genhtml_function_coverage=1 00:41:20.617 --rc genhtml_legend=1 00:41:20.617 --rc geninfo_all_blocks=1 00:41:20.617 --rc geninfo_unexecuted_blocks=1 00:41:20.617 00:41:20.617 ' 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:20.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:20.617 --rc genhtml_branch_coverage=1 00:41:20.617 --rc genhtml_function_coverage=1 00:41:20.617 --rc genhtml_legend=1 00:41:20.617 --rc geninfo_all_blocks=1 00:41:20.617 --rc geninfo_unexecuted_blocks=1 00:41:20.617 00:41:20.617 ' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
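
The xtrace above steps through the `lt`/`cmp_versions` helpers from scripts/common.sh as they decide whether the installed lcov (1.15) predates version 2. A minimal standalone sketch of that field-wise comparison, assuming only what the trace shows (split both versions on `.-:`, compare numerically, treat missing fields as 0); the name `cmp_versions_sketch` is illustrative, not the verbatim scripts/common.sh source:

```bash
#!/usr/bin/env bash
# Field-wise version comparison, as walked through in the trace above.
cmp_versions_sketch() {
    local IFS=.-:                 # split version strings on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v d1 d2
    # Iterate up to the longer field count, mirroring the trace's
    # "(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))" loop condition.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields compare as 0
        if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '=' ]]              # all fields equal: only '=' holds
}

cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2"   # prints the message
```

In the log, `lt 1.15 2` is just `cmp_versions 1.15 '<' 2`, so the check succeeds and the legacy `--rc lcov_branch_coverage=1`-style options are exported, which is exactly what the LCOV_OPTS/LCOV exports traced above show.
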
00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:20.617 05:52:40 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76561 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:41:20.617 05:52:40 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76561 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76561 ']' 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:20.617 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:20.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:20.618 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:20.618 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:20.618 05:52:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:41:20.618 [2024-11-20 05:52:40.439042] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:41:20.618 [2024-11-20 05:52:40.439190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76561 ] 00:41:20.876 [2024-11-20 05:52:40.622408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:20.876 [2024-11-20 05:52:40.771165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.876 [2024-11-20 05:52:40.771308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.876 [2024-11-20 05:52:40.771349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:22.254 05:52:41 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:22.254 05:52:41 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:41:22.254 05:52:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:41:22.254 05:52:41 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:41:22.254 05:52:41 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:41:22.254 05:52:41 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:41:22.254 05:52:41 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:41:22.254 05:52:41 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:41:22.254 05:52:42 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:41:22.254 05:52:42 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:41:22.254 05:52:42 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:41:22.254 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:41:22.254 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:22.254 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:41:22.254 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:41:22.254 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:41:22.513 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:22.513 { 00:41:22.513 "name": "nvme0n1", 00:41:22.513 "aliases": [ 
00:41:22.513 "e7e39632-2c45-4987-b052-eab1a1c6f513" 00:41:22.513 ], 00:41:22.513 "product_name": "NVMe disk", 00:41:22.513 "block_size": 4096, 00:41:22.513 "num_blocks": 1310720, 00:41:22.513 "uuid": "e7e39632-2c45-4987-b052-eab1a1c6f513", 00:41:22.513 "numa_id": -1, 00:41:22.513 "assigned_rate_limits": { 00:41:22.513 "rw_ios_per_sec": 0, 00:41:22.513 "rw_mbytes_per_sec": 0, 00:41:22.513 "r_mbytes_per_sec": 0, 00:41:22.513 "w_mbytes_per_sec": 0 00:41:22.513 }, 00:41:22.513 "claimed": true, 00:41:22.513 "claim_type": "read_many_write_one", 00:41:22.513 "zoned": false, 00:41:22.513 "supported_io_types": { 00:41:22.513 "read": true, 00:41:22.513 "write": true, 00:41:22.513 "unmap": true, 00:41:22.513 "flush": true, 00:41:22.513 "reset": true, 00:41:22.513 "nvme_admin": true, 00:41:22.513 "nvme_io": true, 00:41:22.513 "nvme_io_md": false, 00:41:22.513 "write_zeroes": true, 00:41:22.513 "zcopy": false, 00:41:22.513 "get_zone_info": false, 00:41:22.513 "zone_management": false, 00:41:22.513 "zone_append": false, 00:41:22.513 "compare": true, 00:41:22.513 "compare_and_write": false, 00:41:22.513 "abort": true, 00:41:22.513 "seek_hole": false, 00:41:22.513 "seek_data": false, 00:41:22.513 "copy": true, 00:41:22.513 "nvme_iov_md": false 00:41:22.513 }, 00:41:22.513 "driver_specific": { 00:41:22.513 "nvme": [ 00:41:22.513 { 00:41:22.513 "pci_address": "0000:00:11.0", 00:41:22.513 "trid": { 00:41:22.513 "trtype": "PCIe", 00:41:22.513 "traddr": "0000:00:11.0" 00:41:22.513 }, 00:41:22.513 "ctrlr_data": { 00:41:22.513 "cntlid": 0, 00:41:22.513 "vendor_id": "0x1b36", 00:41:22.513 "model_number": "QEMU NVMe Ctrl", 00:41:22.513 "serial_number": "12341", 00:41:22.513 "firmware_revision": "8.0.0", 00:41:22.513 "subnqn": "nqn.2019-08.org.qemu:12341", 00:41:22.513 "oacs": { 00:41:22.513 "security": 0, 00:41:22.513 "format": 1, 00:41:22.513 "firmware": 0, 00:41:22.513 "ns_manage": 1 00:41:22.513 }, 00:41:22.513 "multi_ctrlr": false, 00:41:22.513 "ana_reporting": false 00:41:22.513 }, 00:41:22.513 "vs": { 00:41:22.513 "nvme_version": "1.4" 00:41:22.513 }, 00:41:22.513 "ns_data": { 00:41:22.513 "id": 1, 00:41:22.513 "can_share": false 00:41:22.513 } 00:41:22.513 } 00:41:22.513 ], 00:41:22.513 "mp_policy": "active_passive" 00:41:22.513 } 00:41:22.513 } 00:41:22.513 ]' 00:41:22.513 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:22.513 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:41:22.513 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:22.773 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:41:22.773 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:41:22.773 05:52:42 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=37d4203c-07db-4320-928c-28df0f8d878e 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:41:22.773 05:52:42 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 37d4203c-07db-4320-928c-28df0f8d878e 00:41:23.032 05:52:42 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:41:23.290 05:52:43 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=775de4cb-4451-47d2-920c-4058f80b07c3 00:41:23.290 05:52:43 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 775de4cb-4451-47d2-920c-4058f80b07c3 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=449463d3-1ca7-494a-aae3-551f25df488f 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 449463d3-1ca7-494a-aae3-551f25df488f 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=449463d3-1ca7-494a-aae3-551f25df488f 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:41:23.549 05:52:43 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 449463d3-1ca7-494a-aae3-551f25df488f 00:41:23.549 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=449463d3-1ca7-494a-aae3-551f25df488f 00:41:23.549 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:23.549 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:41:23.549 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:41:23.549 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 449463d3-1ca7-494a-aae3-551f25df488f 00:41:23.808 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:23.809 { 00:41:23.809 "name": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:23.809 "aliases": [ 00:41:23.809 "lvs/nvme0n1p0" 00:41:23.809 ], 00:41:23.809 "product_name": "Logical Volume", 00:41:23.809 "block_size": 4096, 00:41:23.809 "num_blocks": 26476544, 00:41:23.809 "uuid": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:23.809 "assigned_rate_limits": { 00:41:23.809 "rw_ios_per_sec": 0, 00:41:23.809 "rw_mbytes_per_sec": 0, 00:41:23.809 "r_mbytes_per_sec": 0, 00:41:23.809 "w_mbytes_per_sec": 0 00:41:23.809 }, 00:41:23.809 "claimed": false, 00:41:23.809 "zoned": false, 00:41:23.809 "supported_io_types": { 00:41:23.809 "read": true, 00:41:23.809 "write": true, 00:41:23.809 "unmap": true, 00:41:23.809 "flush": false, 00:41:23.809 "reset": true, 00:41:23.809 "nvme_admin": false, 00:41:23.809 "nvme_io": false, 00:41:23.809 "nvme_io_md": false, 00:41:23.809 "write_zeroes": true, 00:41:23.809 "zcopy": false, 00:41:23.809 "get_zone_info": false, 00:41:23.809 "zone_management": false, 00:41:23.809 "zone_append": false, 00:41:23.809 "compare": false, 00:41:23.809 "compare_and_write": false, 00:41:23.809 "abort": false, 00:41:23.809 "seek_hole": true, 00:41:23.809 "seek_data": true, 00:41:23.809 "copy": false, 00:41:23.809 "nvme_iov_md": false 00:41:23.809 }, 00:41:23.809 "driver_specific": { 00:41:23.809 "lvol": { 00:41:23.809 "lvol_store_uuid": "775de4cb-4451-47d2-920c-4058f80b07c3", 00:41:23.809 "base_bdev": "nvme0n1", 00:41:23.809 "thin_provision": true, 00:41:23.809 "num_allocated_clusters": 0, 00:41:23.809 "snapshot": false, 00:41:23.809 "clone": false, 00:41:23.809 "esnap_clone": false 00:41:23.809 } 00:41:23.809 } 00:41:23.809 } 00:41:23.809 ]' 00:41:23.809 05:52:43 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:23.809 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:41:23.809 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:23.809 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:41:23.809 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:41:23.809 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:41:23.809 05:52:43 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:41:23.809 05:52:43 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:41:23.809 05:52:43 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:41:24.068 05:52:43 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:41:24.068 05:52:43 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:41:24.068 05:52:43 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 449463d3-1ca7-494a-aae3-551f25df488f 00:41:24.068 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=449463d3-1ca7-494a-aae3-551f25df488f 00:41:24.068 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:24.068 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:41:24.068 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:41:24.068 05:52:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 449463d3-1ca7-494a-aae3-551f25df488f 00:41:24.327 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:24.327 { 00:41:24.327 "name": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:24.327 "aliases": [ 00:41:24.327 "lvs/nvme0n1p0" 00:41:24.327 ], 00:41:24.327 "product_name": "Logical Volume", 00:41:24.327 "block_size": 4096, 00:41:24.327 "num_blocks": 26476544, 00:41:24.327 "uuid": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:24.327 "assigned_rate_limits": { 00:41:24.327 "rw_ios_per_sec": 0, 00:41:24.327 "rw_mbytes_per_sec": 0, 00:41:24.327 "r_mbytes_per_sec": 0, 00:41:24.327 "w_mbytes_per_sec": 0 00:41:24.327 }, 00:41:24.327 "claimed": false, 00:41:24.327 "zoned": false, 00:41:24.327 "supported_io_types": { 00:41:24.327 "read": true, 00:41:24.327 "write": true, 00:41:24.327 "unmap": true, 00:41:24.327 "flush": false, 00:41:24.327 "reset": true, 00:41:24.327 "nvme_admin": false, 00:41:24.327 "nvme_io": false, 00:41:24.327 "nvme_io_md": false, 00:41:24.327 "write_zeroes": true, 00:41:24.327 "zcopy": false, 00:41:24.327 "get_zone_info": false, 00:41:24.327 "zone_management": false, 00:41:24.327 "zone_append": false, 00:41:24.327 "compare": false, 00:41:24.327 "compare_and_write": false, 00:41:24.327 "abort": false, 00:41:24.327 "seek_hole": true, 00:41:24.327 "seek_data": true, 00:41:24.327 "copy": false, 00:41:24.327 "nvme_iov_md": false 00:41:24.327 }, 00:41:24.327 "driver_specific": { 00:41:24.327 "lvol": { 00:41:24.327 "lvol_store_uuid": "775de4cb-4451-47d2-920c-4058f80b07c3", 00:41:24.327 "base_bdev": "nvme0n1", 00:41:24.327 "thin_provision": true, 00:41:24.327 "num_allocated_clusters": 0, 00:41:24.327 "snapshot": false, 00:41:24.327 "clone": false, 00:41:24.327 "esnap_clone": false 00:41:24.327 } 00:41:24.327 } 00:41:24.327 } 00:41:24.327 ]' 00:41:24.327 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:24.327 05:52:44 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:41:24.327 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:24.327 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:41:24.327 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:41:24.327 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:41:24.327 05:52:44 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:41:24.327 05:52:44 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:41:24.587 05:52:44 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:41:24.587 05:52:44 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:41:24.587 05:52:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 449463d3-1ca7-494a-aae3-551f25df488f 00:41:24.587 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=449463d3-1ca7-494a-aae3-551f25df488f 00:41:24.587 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:41:24.587 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:41:24.587 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:41:24.587 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 449463d3-1ca7-494a-aae3-551f25df488f 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:41:24.847 { 00:41:24.847 "name": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:24.847 "aliases": [ 00:41:24.847 "lvs/nvme0n1p0" 00:41:24.847 ], 00:41:24.847 "product_name": "Logical Volume", 00:41:24.847 "block_size": 4096, 00:41:24.847 "num_blocks": 26476544, 00:41:24.847 "uuid": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:24.847 "assigned_rate_limits": { 00:41:24.847 "rw_ios_per_sec": 0, 00:41:24.847 "rw_mbytes_per_sec": 0, 00:41:24.847 "r_mbytes_per_sec": 0, 00:41:24.847 "w_mbytes_per_sec": 0 00:41:24.847 }, 00:41:24.847 "claimed": false, 00:41:24.847 "zoned": false, 00:41:24.847 "supported_io_types": { 00:41:24.847 "read": true, 00:41:24.847 "write": true, 00:41:24.847 "unmap": true, 00:41:24.847 "flush": false, 00:41:24.847 "reset": true, 00:41:24.847 "nvme_admin": false, 00:41:24.847 "nvme_io": false, 00:41:24.847 "nvme_io_md": false, 00:41:24.847 "write_zeroes": true, 00:41:24.847 "zcopy": false, 00:41:24.847 "get_zone_info": false, 00:41:24.847 "zone_management": false, 00:41:24.847 "zone_append": false, 00:41:24.847 "compare": false, 00:41:24.847 "compare_and_write": false, 00:41:24.847 "abort": false, 00:41:24.847 "seek_hole": true, 00:41:24.847 "seek_data": true, 00:41:24.847 "copy": false, 00:41:24.847 "nvme_iov_md": false 00:41:24.847 }, 00:41:24.847 "driver_specific": { 00:41:24.847 "lvol": { 00:41:24.847 "lvol_store_uuid": "775de4cb-4451-47d2-920c-4058f80b07c3", 00:41:24.847 "base_bdev": "nvme0n1", 00:41:24.847 "thin_provision": true, 00:41:24.847 "num_allocated_clusters": 0, 00:41:24.847 "snapshot": false, 00:41:24.847 "clone": false, 00:41:24.847 "esnap_clone": false 00:41:24.847 } 00:41:24.847 } 00:41:24.847 } 00:41:24.847 ]' 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:41:24.847 05:52:44 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:41:24.847 05:52:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:41:24.847 05:52:44 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 449463d3-1ca7-494a-aae3-551f25df488f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:41:25.107 [2024-11-20 05:52:44.932466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.932539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:25.107 [2024-11-20 05:52:44.932573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:25.107 [2024-11-20 05:52:44.932583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.936329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.936372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:25.107 [2024-11-20 05:52:44.936402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.718 ms 00:41:25.107 [2024-11-20 05:52:44.936412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.936550] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:25.107 [2024-11-20 05:52:44.937737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:25.107 [2024-11-20 05:52:44.937777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.937788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:25.107 [2024-11-20 05:52:44.937799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:41:25.107 [2024-11-20 05:52:44.937818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.937935] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2028fbc6-7764-4261-8bfa-c9609e66672d 00:41:25.107 [2024-11-20 05:52:44.940395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.940433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:41:25.107 [2024-11-20 05:52:44.940445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:41:25.107 [2024-11-20 05:52:44.940457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.954831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.954908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:25.107 [2024-11-20 05:52:44.954932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.289 ms 00:41:25.107 [2024-11-20 05:52:44.954944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.955136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.955156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:25.107 [2024-11-20 05:52:44.955166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.104 ms 00:41:25.107 [2024-11-20 05:52:44.955182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.955226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.955243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:25.107 [2024-11-20 05:52:44.955253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:25.107 [2024-11-20 05:52:44.955269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.955311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:41:25.107 [2024-11-20 05:52:44.961668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.961735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:25.107 [2024-11-20 05:52:44.961751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.378 ms 00:41:25.107 [2024-11-20 05:52:44.961759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.961843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.961854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:25.107 [2024-11-20 05:52:44.961867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:25.107 [2024-11-20 05:52:44.961892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.961934] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:41:25.107 [2024-11-20 05:52:44.962095] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:25.107 [2024-11-20 05:52:44.962123] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:25.107 [2024-11-20 05:52:44.962136] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:25.107 [2024-11-20 05:52:44.962152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962162] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962175] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:41:25.107 [2024-11-20 05:52:44.962184] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:25.107 [2024-11-20 05:52:44.962195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:25.107 [2024-11-20 05:52:44.962207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:25.107 [2024-11-20 05:52:44.962219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.107 [2024-11-20 05:52:44.962228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:25.107 [2024-11-20 05:52:44.962241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:41:25.107 [2024-11-20 05:52:44.962250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.962348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
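
Buried in this startup trace is the single RPC that created the FTL instance, issued at ftl/trim.sh@49; it is re-wrapped below for readability, with the option meanings as the surrounding log reports them (the lvol UUID is specific to this run):

```bash
# The bdev_ftl_create call from ftl/trim.sh@49, re-wrapped. -d names the base
# bdev (the thin-provisioned lvol created above), -c the NV cache bdev
# (nvc0n1p0, the 5171 MiB split of the cache controller). --core_mask 7
# matches the 0x7 mask spdk_tgt was started with; --l2p_dram_limit 60 caps the
# L2P table at 60 MiB (the log later notes "l2p maximum resident size is: 59
# (of 60) MiB"); -t 240 matches the 240 s timeout set at trim.sh@25, which
# covers the multi-second NV cache scrub traced below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
    -b ftl0 \
    -d 449463d3-1ca7-494a-aae3-551f25df488f \
    -c nvc0n1p0 \
    --core_mask 7 \
    --l2p_dram_limit 60 \
    --overprovisioning 10
```
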
00:41:25.107 [2024-11-20 05:52:44.962361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:25.107 [2024-11-20 05:52:44.962373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:41:25.107 [2024-11-20 05:52:44.962381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.107 [2024-11-20 05:52:44.962519] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:25.107 [2024-11-20 05:52:44.962537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:25.107 [2024-11-20 05:52:44.962549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:25.107 [2024-11-20 05:52:44.962578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:25.107 [2024-11-20 05:52:44.962606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:25.107 [2024-11-20 05:52:44.962625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:25.107 [2024-11-20 05:52:44.962632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:41:25.107 [2024-11-20 05:52:44.962641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:25.107 [2024-11-20 05:52:44.962649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:25.107 [2024-11-20 05:52:44.962660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:41:25.107 [2024-11-20 05:52:44.962669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:25.107 [2024-11-20 05:52:44.962690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:25.107 [2024-11-20 05:52:44.962719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:25.107 [2024-11-20 05:52:44.962744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:25.107 [2024-11-20 05:52:44.962770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962789] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:41:25.107 [2024-11-20 05:52:44.962796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:25.107 [2024-11-20 05:52:44.962825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:25.107 [2024-11-20 05:52:44.962838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:41:25.107 [2024-11-20 05:52:44.962846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:25.107 [2024-11-20 05:52:44.962856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:25.107 [2024-11-20 05:52:44.962863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:41:25.107 [2024-11-20 05:52:44.962874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:25.107 [2024-11-20 05:52:44.962881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:25.107 [2024-11-20 05:52:44.962891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:41:25.107 [2024-11-20 05:52:44.962899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:25.108 [2024-11-20 05:52:44.962909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:25.108 [2024-11-20 05:52:44.962917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:41:25.108 [2024-11-20 05:52:44.962927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:25.108 [2024-11-20 05:52:44.962934] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:25.108 [2024-11-20 05:52:44.962946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:25.108 [2024-11-20 05:52:44.962954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:25.108 [2024-11-20 05:52:44.962968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:25.108 [2024-11-20 05:52:44.962978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:25.108 [2024-11-20 05:52:44.962992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:25.108 [2024-11-20 05:52:44.962999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:25.108 [2024-11-20 05:52:44.963009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:25.108 [2024-11-20 05:52:44.963017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:25.108 [2024-11-20 05:52:44.963028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:25.108 [2024-11-20 05:52:44.963041] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:25.108 [2024-11-20 05:52:44.963055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:41:25.108 [2024-11-20 05:52:44.963079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:41:25.108 [2024-11-20 05:52:44.963088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:41:25.108 [2024-11-20 05:52:44.963099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:41:25.108 [2024-11-20 05:52:44.963107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:41:25.108 [2024-11-20 05:52:44.963118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:41:25.108 [2024-11-20 05:52:44.963126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:41:25.108 [2024-11-20 05:52:44.963136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:41:25.108 [2024-11-20 05:52:44.963144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:41:25.108 [2024-11-20 05:52:44.963158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:41:25.108 [2024-11-20 05:52:44.963208] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:25.108 [2024-11-20 05:52:44.963224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:25.108 [2024-11-20 05:52:44.963246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:25.108 [2024-11-20 05:52:44.963255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:25.108 [2024-11-20 05:52:44.963265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:25.108 [2024-11-20 05:52:44.963274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:25.108 [2024-11-20 05:52:44.963285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:25.108 [2024-11-20 05:52:44.963294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:41:25.108 [2024-11-20 05:52:44.963306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:25.108 [2024-11-20 05:52:44.963393] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:41:25.108 [2024-11-20 05:52:44.963414] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:41:28.395 [2024-11-20 05:52:48.167289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.167384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:41:28.395 [2024-11-20 05:52:48.167417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3210.073 ms 00:41:28.395 [2024-11-20 05:52:48.167436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.395 [2024-11-20 05:52:48.218689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.218774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:28.395 [2024-11-20 05:52:48.218790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.925 ms 00:41:28.395 [2024-11-20 05:52:48.218810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.395 [2024-11-20 05:52:48.219042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.219064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:28.395 [2024-11-20 05:52:48.219073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:41:28.395 [2024-11-20 05:52:48.219088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.395 [2024-11-20 05:52:48.285575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.285680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:28.395 [2024-11-20 05:52:48.285697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.518 ms 00:41:28.395 [2024-11-20 05:52:48.285709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.395 [2024-11-20 05:52:48.285854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.285870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:28.395 [2024-11-20 05:52:48.285879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:28.395 [2024-11-20 05:52:48.285890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.395 [2024-11-20 05:52:48.286701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.286730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:28.395 [2024-11-20 05:52:48.286740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:41:28.395 [2024-11-20 05:52:48.286751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.395 [2024-11-20 05:52:48.286905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.395 [2024-11-20 05:52:48.286926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:28.395 [2024-11-20 05:52:48.286936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:41:28.395 [2024-11-20 05:52:48.286951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.315488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.315562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:41:28.655 [2024-11-20 05:52:48.315596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.528 ms 00:41:28.655 [2024-11-20 05:52:48.315608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.331991] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:41:28.655 [2024-11-20 05:52:48.359359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.359463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:28.655 [2024-11-20 05:52:48.359482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.643 ms 00:41:28.655 [2024-11-20 05:52:48.359491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.462135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.462241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:41:28.655 [2024-11-20 05:52:48.462262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.676 ms 00:41:28.655 [2024-11-20 05:52:48.462272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.462548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.462576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:28.655 [2024-11-20 05:52:48.462593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:41:28.655 [2024-11-20 05:52:48.462601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.499848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.499930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:41:28.655 [2024-11-20 05:52:48.499949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.274 ms 00:41:28.655 [2024-11-20 05:52:48.499957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.535012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.535062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:41:28.655 [2024-11-20 05:52:48.535078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.040 ms 00:41:28.655 [2024-11-20 05:52:48.535086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.655 [2024-11-20 05:52:48.535890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.655 [2024-11-20 05:52:48.535918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:28.655 [2024-11-20 05:52:48.535931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:41:28.655 [2024-11-20 05:52:48.535939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.640034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.916 [2024-11-20 05:52:48.640117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:41:28.916 [2024-11-20 05:52:48.640157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.246 ms 00:41:28.916 [2024-11-20 05:52:48.640166] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.680948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.916 [2024-11-20 05:52:48.681012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:41:28.916 [2024-11-20 05:52:48.681030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.741 ms 00:41:28.916 [2024-11-20 05:52:48.681039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.722402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.916 [2024-11-20 05:52:48.722469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:41:28.916 [2024-11-20 05:52:48.722487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.347 ms 00:41:28.916 [2024-11-20 05:52:48.722496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.761258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.916 [2024-11-20 05:52:48.761325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:28.916 [2024-11-20 05:52:48.761343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.716 ms 00:41:28.916 [2024-11-20 05:52:48.761388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.761511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.916 [2024-11-20 05:52:48.761527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:28.916 [2024-11-20 05:52:48.761543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:41:28.916 [2024-11-20 05:52:48.761553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.761657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:28.916 [2024-11-20 05:52:48.761669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:28.916 [2024-11-20 05:52:48.761681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:41:28.916 [2024-11-20 05:52:48.761690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:28.916 [2024-11-20 05:52:48.763145] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:28.916 [2024-11-20 05:52:48.768234] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3837.682 ms, result 0 00:41:28.916 [2024-11-20 05:52:48.769149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:28.916 { 00:41:28.916 "name": "ftl0", 00:41:28.916 "uuid": "2028fbc6-7764-4261-8bfa-c9609e66672d" 00:41:28.916 } 00:41:28.916 05:52:48 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:41:28.916 05:52:48 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:41:28.916 05:52:48 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:41:28.916 05:52:48 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:41:28.916 05:52:48 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:41:28.916 05:52:48 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:41:28.916 05:52:48 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:29.175 05:52:49 
ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:41:29.435 [ 00:41:29.435 { 00:41:29.435 "name": "ftl0", 00:41:29.435 "aliases": [ 00:41:29.435 "2028fbc6-7764-4261-8bfa-c9609e66672d" 00:41:29.435 ], 00:41:29.435 "product_name": "FTL disk", 00:41:29.435 "block_size": 4096, 00:41:29.435 "num_blocks": 23592960, 00:41:29.435 "uuid": "2028fbc6-7764-4261-8bfa-c9609e66672d", 00:41:29.435 "assigned_rate_limits": { 00:41:29.435 "rw_ios_per_sec": 0, 00:41:29.435 "rw_mbytes_per_sec": 0, 00:41:29.435 "r_mbytes_per_sec": 0, 00:41:29.435 "w_mbytes_per_sec": 0 00:41:29.435 }, 00:41:29.435 "claimed": false, 00:41:29.435 "zoned": false, 00:41:29.435 "supported_io_types": { 00:41:29.435 "read": true, 00:41:29.435 "write": true, 00:41:29.435 "unmap": true, 00:41:29.435 "flush": true, 00:41:29.435 "reset": false, 00:41:29.435 "nvme_admin": false, 00:41:29.435 "nvme_io": false, 00:41:29.435 "nvme_io_md": false, 00:41:29.435 "write_zeroes": true, 00:41:29.435 "zcopy": false, 00:41:29.435 "get_zone_info": false, 00:41:29.435 "zone_management": false, 00:41:29.435 "zone_append": false, 00:41:29.435 "compare": false, 00:41:29.435 "compare_and_write": false, 00:41:29.435 "abort": false, 00:41:29.435 "seek_hole": false, 00:41:29.435 "seek_data": false, 00:41:29.435 "copy": false, 00:41:29.435 "nvme_iov_md": false 00:41:29.435 }, 00:41:29.435 "driver_specific": { 00:41:29.435 "ftl": { 00:41:29.435 "base_bdev": "449463d3-1ca7-494a-aae3-551f25df488f", 00:41:29.435 "cache": "nvc0n1p0" 00:41:29.435 } 00:41:29.435 } 00:41:29.435 } 00:41:29.435 ] 00:41:29.435 05:52:49 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:41:29.435 05:52:49 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:41:29.435 05:52:49 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:41:29.695 05:52:49 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:41:29.695 05:52:49 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:41:29.954 05:52:49 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:41:29.954 { 00:41:29.954 "name": "ftl0", 00:41:29.954 "aliases": [ 00:41:29.954 "2028fbc6-7764-4261-8bfa-c9609e66672d" 00:41:29.954 ], 00:41:29.954 "product_name": "FTL disk", 00:41:29.954 "block_size": 4096, 00:41:29.954 "num_blocks": 23592960, 00:41:29.954 "uuid": "2028fbc6-7764-4261-8bfa-c9609e66672d", 00:41:29.954 "assigned_rate_limits": { 00:41:29.954 "rw_ios_per_sec": 0, 00:41:29.954 "rw_mbytes_per_sec": 0, 00:41:29.954 "r_mbytes_per_sec": 0, 00:41:29.954 "w_mbytes_per_sec": 0 00:41:29.954 }, 00:41:29.954 "claimed": false, 00:41:29.954 "zoned": false, 00:41:29.954 "supported_io_types": { 00:41:29.954 "read": true, 00:41:29.954 "write": true, 00:41:29.954 "unmap": true, 00:41:29.954 "flush": true, 00:41:29.954 "reset": false, 00:41:29.954 "nvme_admin": false, 00:41:29.954 "nvme_io": false, 00:41:29.954 "nvme_io_md": false, 00:41:29.954 "write_zeroes": true, 00:41:29.954 "zcopy": false, 00:41:29.954 "get_zone_info": false, 00:41:29.954 "zone_management": false, 00:41:29.954 "zone_append": false, 00:41:29.954 "compare": false, 00:41:29.954 "compare_and_write": false, 00:41:29.954 "abort": false, 00:41:29.954 "seek_hole": false, 00:41:29.954 "seek_data": false, 00:41:29.954 "copy": false, 00:41:29.954 "nvme_iov_md": false 00:41:29.954 }, 00:41:29.954 "driver_specific": { 00:41:29.954 "ftl": { 00:41:29.954 "base_bdev": 
"449463d3-1ca7-494a-aae3-551f25df488f", 00:41:29.954 "cache": "nvc0n1p0" 00:41:29.954 } 00:41:29.954 } 00:41:29.954 } 00:41:29.954 ]' 00:41:29.954 05:52:49 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:41:29.954 05:52:49 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:41:29.954 05:52:49 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:41:30.214 [2024-11-20 05:52:49.961956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:49.962014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:30.214 [2024-11-20 05:52:49.962038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:30.214 [2024-11-20 05:52:49.962053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:49.962090] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:41:30.214 [2024-11-20 05:52:49.966914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:49.966946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:30.214 [2024-11-20 05:52:49.966966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.813 ms 00:41:30.214 [2024-11-20 05:52:49.966974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:49.967559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:49.967581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:30.214 [2024-11-20 05:52:49.967594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:41:30.214 [2024-11-20 05:52:49.967603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:49.970400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:49.970422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:30.214 [2024-11-20 05:52:49.970435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:41:30.214 [2024-11-20 05:52:49.970443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:49.975935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:49.975964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:30.214 [2024-11-20 05:52:49.975976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.469 ms 00:41:30.214 [2024-11-20 05:52:49.975999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:50.016794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:50.016864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:30.214 [2024-11-20 05:52:50.016889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.773 ms 00:41:30.214 [2024-11-20 05:52:50.016899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:50.042900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:50.042981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:30.214 [2024-11-20 05:52:50.043003] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.916 ms 00:41:30.214 [2024-11-20 05:52:50.043018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:50.043278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:50.043290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:30.214 [2024-11-20 05:52:50.043301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:41:30.214 [2024-11-20 05:52:50.043310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.214 [2024-11-20 05:52:50.081614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.214 [2024-11-20 05:52:50.081665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:30.214 [2024-11-20 05:52:50.081682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.338 ms 00:41:30.214 [2024-11-20 05:52:50.081689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.215 [2024-11-20 05:52:50.118279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.215 [2024-11-20 05:52:50.118327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:30.215 [2024-11-20 05:52:50.118349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.571 ms 00:41:30.215 [2024-11-20 05:52:50.118357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.500 [2024-11-20 05:52:50.157278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.500 [2024-11-20 05:52:50.157354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:30.500 [2024-11-20 05:52:50.157389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.904 ms 00:41:30.500 [2024-11-20 05:52:50.157397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.500 [2024-11-20 05:52:50.195167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.500 [2024-11-20 05:52:50.195220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:30.500 [2024-11-20 05:52:50.195237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.658 ms 00:41:30.500 [2024-11-20 05:52:50.195244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.500 [2024-11-20 05:52:50.195329] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:30.500 [2024-11-20 05:52:50.195347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:30.500 [2024-11-20 05:52:50.195360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:30.500 [2024-11-20 05:52:50.195369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:30.500 [2024-11-20 05:52:50.195380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 
[2024-11-20 05:52:50.195420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:41:30.501 [2024-11-20 05:52:50.195681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.195992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:30.501 [2024-11-20 05:52:50.196240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:30.502 [2024-11-20 05:52:50.196377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:30.502 [2024-11-20 05:52:50.196391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:41:30.502 [2024-11-20 05:52:50.196399] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:30.502 [2024-11-20 05:52:50.196409] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:30.502 [2024-11-20 05:52:50.196417] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:30.502 [2024-11-20 05:52:50.196431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:30.502 [2024-11-20 05:52:50.196439] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:30.502 [2024-11-20 05:52:50.196450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:41:30.502 [2024-11-20 05:52:50.196457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:30.502 [2024-11-20 05:52:50.196466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:30.502 [2024-11-20 05:52:50.196472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:30.502 [2024-11-20 05:52:50.196483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.502 [2024-11-20 05:52:50.196492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:30.502 [2024-11-20 05:52:50.196503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:41:30.502 [2024-11-20 05:52:50.196511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.502 [2024-11-20 05:52:50.218505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.502 [2024-11-20 05:52:50.218553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:30.502 [2024-11-20 05:52:50.218572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.987 ms 00:41:30.502 [2024-11-20 05:52:50.218580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.502 [2024-11-20 05:52:50.219284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:30.502 [2024-11-20 05:52:50.219301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:30.502 [2024-11-20 05:52:50.219314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:41:30.502 [2024-11-20 05:52:50.219326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.502 [2024-11-20 05:52:50.293017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.502 [2024-11-20 05:52:50.293075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:30.502 [2024-11-20 05:52:50.293090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.502 [2024-11-20 05:52:50.293099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.502 [2024-11-20 05:52:50.293252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.502 [2024-11-20 05:52:50.293263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:30.502 [2024-11-20 05:52:50.293273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.502 [2024-11-20 05:52:50.293281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.502 [2024-11-20 05:52:50.293363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.502 [2024-11-20 05:52:50.293375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:30.502 [2024-11-20 05:52:50.293392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.502 [2024-11-20 05:52:50.293400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.502 [2024-11-20 05:52:50.293437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.502 [2024-11-20 05:52:50.293445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:30.502 [2024-11-20 05:52:50.293456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.502 [2024-11-20 05:52:50.293464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 
05:52:50.437364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.437436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:30.762 [2024-11-20 05:52:50.437451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.437460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.550117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.550195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:30.762 [2024-11-20 05:52:50.550215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.550226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.550385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.550398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:30.762 [2024-11-20 05:52:50.550437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.550451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.550520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.550530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:30.762 [2024-11-20 05:52:50.550542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.550551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.550720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.550734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:30.762 [2024-11-20 05:52:50.550747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.550771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.550849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.550861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:30.762 [2024-11-20 05:52:50.550871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.550879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.550945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.550954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:30.762 [2024-11-20 05:52:50.550967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.550974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.551049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:30.762 [2024-11-20 05:52:50.551058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:30.762 [2024-11-20 05:52:50.551068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:30.762 [2024-11-20 05:52:50.551075] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:30.762 [2024-11-20 05:52:50.551290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 590.446 ms, result 0 00:41:30.762 true 00:41:30.762 05:52:50 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76561 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76561 ']' 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76561 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76561 00:41:30.762 killing process with pid 76561 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76561' 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76561 00:41:30.762 05:52:50 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76561 00:41:38.894 05:52:57 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:41:38.894 65536+0 records in 00:41:38.894 65536+0 records out 00:41:38.894 268435456 bytes (268 MB, 256 MiB) copied, 0.872745 s, 308 MB/s 00:41:38.894 05:52:58 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:38.894 [2024-11-20 05:52:58.634729] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:41:38.894 [2024-11-20 05:52:58.634905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76811 ] 00:41:38.894 [2024-11-20 05:52:58.812185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:39.154 [2024-11-20 05:52:58.940603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.742 [2024-11-20 05:52:59.353794] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:39.742 [2024-11-20 05:52:59.353878] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:39.742 [2024-11-20 05:52:59.515080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.515141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:39.742 [2024-11-20 05:52:59.515156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:39.742 [2024-11-20 05:52:59.515180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.518398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.518440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:39.742 [2024-11-20 05:52:59.518466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.206 ms 00:41:39.742 [2024-11-20 05:52:59.518474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.518576] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:39.742 [2024-11-20 05:52:59.519531] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:39.742 [2024-11-20 05:52:59.519564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.519573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:39.742 [2024-11-20 05:52:59.519582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:41:39.742 [2024-11-20 05:52:59.519590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.522092] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:39.742 [2024-11-20 05:52:59.540767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.540815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:39.742 [2024-11-20 05:52:59.540828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.712 ms 00:41:39.742 [2024-11-20 05:52:59.540852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.540940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.540953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:39.742 [2024-11-20 05:52:59.540961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:41:39.742 [2024-11-20 05:52:59.540969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.553134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:41:39.742 [2024-11-20 05:52:59.553168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:39.742 [2024-11-20 05:52:59.553178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.150 ms 00:41:39.742 [2024-11-20 05:52:59.553185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.553310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.553324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:39.742 [2024-11-20 05:52:59.553333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:41:39.742 [2024-11-20 05:52:59.553340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.553371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.553384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:39.742 [2024-11-20 05:52:59.553391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:39.742 [2024-11-20 05:52:59.553398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.553421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:41:39.742 [2024-11-20 05:52:59.559015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.559047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:39.742 [2024-11-20 05:52:59.559072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.612 ms 00:41:39.742 [2024-11-20 05:52:59.559080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.742 [2024-11-20 05:52:59.559128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.742 [2024-11-20 05:52:59.559139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:39.742 [2024-11-20 05:52:59.559148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:39.743 [2024-11-20 05:52:59.559155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.743 [2024-11-20 05:52:59.559174] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:39.743 [2024-11-20 05:52:59.559200] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:39.743 [2024-11-20 05:52:59.559237] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:39.743 [2024-11-20 05:52:59.559252] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:39.743 [2024-11-20 05:52:59.559339] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:39.743 [2024-11-20 05:52:59.559354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:39.743 [2024-11-20 05:52:59.559364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:39.743 [2024-11-20 05:52:59.559373] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559385] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559394] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:41:39.743 [2024-11-20 05:52:59.559402] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:39.743 [2024-11-20 05:52:59.559409] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:39.743 [2024-11-20 05:52:59.559416] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:39.743 [2024-11-20 05:52:59.559440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.743 [2024-11-20 05:52:59.559449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:39.743 [2024-11-20 05:52:59.559456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:41:39.743 [2024-11-20 05:52:59.559464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.743 [2024-11-20 05:52:59.559538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.743 [2024-11-20 05:52:59.559550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:39.743 [2024-11-20 05:52:59.559558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:41:39.743 [2024-11-20 05:52:59.559566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.743 [2024-11-20 05:52:59.559653] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:39.743 [2024-11-20 05:52:59.559663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:39.743 [2024-11-20 05:52:59.559673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:39.743 [2024-11-20 05:52:59.559698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:39.743 [2024-11-20 05:52:59.559718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:39.743 [2024-11-20 05:52:59.559731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:39.743 [2024-11-20 05:52:59.559738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:41:39.743 [2024-11-20 05:52:59.559744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:39.743 [2024-11-20 05:52:59.559765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:39.743 [2024-11-20 05:52:59.559772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:41:39.743 [2024-11-20 05:52:59.559779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:39.743 [2024-11-20 05:52:59.559792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559798] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:39.743 [2024-11-20 05:52:59.559812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:39.743 [2024-11-20 05:52:59.559844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:39.743 [2024-11-20 05:52:59.559864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:39.743 [2024-11-20 05:52:59.559883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:39.743 [2024-11-20 05:52:59.559897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:39.743 [2024-11-20 05:52:59.559903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:39.743 [2024-11-20 05:52:59.559917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:39.743 [2024-11-20 05:52:59.559924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:41:39.743 [2024-11-20 05:52:59.559930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:39.743 [2024-11-20 05:52:59.559937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:39.743 [2024-11-20 05:52:59.559943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:41:39.743 [2024-11-20 05:52:59.559949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:39.743 [2024-11-20 05:52:59.559962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:41:39.743 [2024-11-20 05:52:59.559968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.559975] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:39.743 [2024-11-20 05:52:59.559983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:39.743 [2024-11-20 05:52:59.559990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:39.743 [2024-11-20 05:52:59.560001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:39.743 [2024-11-20 05:52:59.560009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:39.743 [2024-11-20 05:52:59.560016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:39.743 [2024-11-20 05:52:59.560023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:39.743 
[2024-11-20 05:52:59.560030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:39.743 [2024-11-20 05:52:59.560037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:39.743 [2024-11-20 05:52:59.560043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:39.743 [2024-11-20 05:52:59.560052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:39.743 [2024-11-20 05:52:59.560060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:41:39.743 [2024-11-20 05:52:59.560077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:41:39.743 [2024-11-20 05:52:59.560084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:41:39.743 [2024-11-20 05:52:59.560091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:41:39.743 [2024-11-20 05:52:59.560098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:41:39.743 [2024-11-20 05:52:59.560105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:41:39.743 [2024-11-20 05:52:59.560112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:41:39.743 [2024-11-20 05:52:59.560119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:41:39.743 [2024-11-20 05:52:59.560127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:41:39.743 [2024-11-20 05:52:59.560133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:41:39.743 [2024-11-20 05:52:59.560170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:39.743 [2024-11-20 05:52:59.560178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:41:39.743 [2024-11-20 05:52:59.560193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:39.743 [2024-11-20 05:52:59.560200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:39.744 [2024-11-20 05:52:59.560206] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:39.744 [2024-11-20 05:52:59.560215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.744 [2024-11-20 05:52:59.560223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:39.744 [2024-11-20 05:52:59.560235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:41:39.744 [2024-11-20 05:52:59.560242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.744 [2024-11-20 05:52:59.608536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.744 [2024-11-20 05:52:59.608605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:39.744 [2024-11-20 05:52:59.608617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.326 ms 00:41:39.744 [2024-11-20 05:52:59.608627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:39.744 [2024-11-20 05:52:59.608798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:39.744 [2024-11-20 05:52:59.608820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:39.744 [2024-11-20 05:52:59.608830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:41:39.744 [2024-11-20 05:52:59.608837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.689152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.689220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:40.005 [2024-11-20 05:52:59.689232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.446 ms 00:41:40.005 [2024-11-20 05:52:59.689241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.689331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.689341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:40.005 [2024-11-20 05:52:59.689351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:40.005 [2024-11-20 05:52:59.689359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.690170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.690191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:40.005 [2024-11-20 05:52:59.690207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:41:40.005 [2024-11-20 05:52:59.690214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.690341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.690361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:40.005 [2024-11-20 05:52:59.690371] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:41:40.005 [2024-11-20 05:52:59.690378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.713665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.713707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:40.005 [2024-11-20 05:52:59.713735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.308 ms 00:41:40.005 [2024-11-20 05:52:59.713743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.733012] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:41:40.005 [2024-11-20 05:52:59.733054] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:40.005 [2024-11-20 05:52:59.733082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.733091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:40.005 [2024-11-20 05:52:59.733101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.210 ms 00:41:40.005 [2024-11-20 05:52:59.733108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.760449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.760489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:40.005 [2024-11-20 05:52:59.760515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.314 ms 00:41:40.005 [2024-11-20 05:52:59.760539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.777358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.777395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:40.005 [2024-11-20 05:52:59.777422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.778 ms 00:41:40.005 [2024-11-20 05:52:59.777428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.794236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.794273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:40.005 [2024-11-20 05:52:59.794298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.767 ms 00:41:40.005 [2024-11-20 05:52:59.794305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.795063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.795094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:40.005 [2024-11-20 05:52:59.795104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:41:40.005 [2024-11-20 05:52:59.795112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.888311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.005 [2024-11-20 05:52:59.888397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:40.005 [2024-11-20 05:52:59.888414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.347 ms 00:41:40.005 [2024-11-20 05:52:59.888440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.005 [2024-11-20 05:52:59.898643] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:41:40.359 [2024-11-20 05:52:59.923651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.923731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:40.359 [2024-11-20 05:52:59.923747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.160 ms 00:41:40.359 [2024-11-20 05:52:59.923772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.359 [2024-11-20 05:52:59.923962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.923977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:40.359 [2024-11-20 05:52:59.923987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:41:40.359 [2024-11-20 05:52:59.923994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.359 [2024-11-20 05:52:59.924066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.924076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:40.359 [2024-11-20 05:52:59.924084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:41:40.359 [2024-11-20 05:52:59.924092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.359 [2024-11-20 05:52:59.924136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.924152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:40.359 [2024-11-20 05:52:59.924206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:41:40.359 [2024-11-20 05:52:59.924214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.359 [2024-11-20 05:52:59.924255] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:40.359 [2024-11-20 05:52:59.924265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.924272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:40.359 [2024-11-20 05:52:59.924280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:40.359 [2024-11-20 05:52:59.924288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.359 [2024-11-20 05:52:59.959799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.959848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:40.359 [2024-11-20 05:52:59.959860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.558 ms 00:41:40.359 [2024-11-20 05:52:59.959867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:40.359 [2024-11-20 05:52:59.959997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:40.359 [2024-11-20 05:52:59.960010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:40.359 [2024-11-20 05:52:59.960019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:41:40.359 [2024-11-20 05:52:59.960027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
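Each FTL management step above is recorded by trace_step as an Action with a name, a duration in ms, and a status, and the "Management process finished" summary that follows reports the total for the whole pipeline. A minimal sketch for cross-checking the two, assuming this console output has been saved to a local file (the name ftl.log is hypothetical) in its original one-entry-per-line layout:

# Sum the per-step durations and compare against the reported pipeline total.
# Filter the log down to a single management pipeline first if several ran;
# the total also covers time between steps, so the sum is typically lower.
grep -o 'duration: [0-9.]* ms' ftl.log \
  | awk '{ sum += $2 } END { printf "sum of step durations: %.3f ms\n", sum }'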
00:41:40.359 [2024-11-20 05:52:59.961358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:40.359 [2024-11-20 05:52:59.965538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.788 ms, result 0 00:41:40.359 [2024-11-20 05:52:59.966447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:40.359 [2024-11-20 05:52:59.983737] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:41.306  [2024-11-20T05:53:02.167Z] Copying: 26/256 [MB] (26 MBps) [2024-11-20T05:53:03.108Z] Copying: 53/256 [MB] (26 MBps) [2024-11-20T05:53:04.047Z] Copying: 80/256 [MB] (27 MBps) [2024-11-20T05:53:04.987Z] Copying: 108/256 [MB] (28 MBps) [2024-11-20T05:53:06.367Z] Copying: 137/256 [MB] (28 MBps) [2024-11-20T05:53:07.308Z] Copying: 165/256 [MB] (27 MBps) [2024-11-20T05:53:08.248Z] Copying: 192/256 [MB] (27 MBps) [2024-11-20T05:53:09.186Z] Copying: 219/256 [MB] (26 MBps) [2024-11-20T05:53:09.445Z] Copying: 246/256 [MB] (27 MBps) [2024-11-20T05:53:09.445Z] Copying: 256/256 [MB] (average 27 MBps)[2024-11-20 05:53:09.326585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:49.526 [2024-11-20 05:53:09.342217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.342277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:49.526 [2024-11-20 05:53:09.342294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:49.526 [2024-11-20 05:53:09.342326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.342350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:41:49.526 [2024-11-20 05:53:09.347137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.347169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:49.526 [2024-11-20 05:53:09.347196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:41:49.526 [2024-11-20 05:53:09.347203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.349267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.349303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:49.526 [2024-11-20 05:53:09.349314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:41:49.526 [2024-11-20 05:53:09.349322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.356003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.356049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:49.526 [2024-11-20 05:53:09.356059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.676 ms 00:41:49.526 [2024-11-20 05:53:09.356066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.361641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.361675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:49.526 
[2024-11-20 05:53:09.361701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.535 ms 00:41:49.526 [2024-11-20 05:53:09.361709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.398263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.398343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:49.526 [2024-11-20 05:53:09.398356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.571 ms 00:41:49.526 [2024-11-20 05:53:09.398364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.419151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.419200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:49.526 [2024-11-20 05:53:09.419233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.772 ms 00:41:49.526 [2024-11-20 05:53:09.419241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.526 [2024-11-20 05:53:09.419405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.526 [2024-11-20 05:53:09.419417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:49.526 [2024-11-20 05:53:09.419426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:41:49.526 [2024-11-20 05:53:09.419434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.787 [2024-11-20 05:53:09.455215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.787 [2024-11-20 05:53:09.455263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:49.787 [2024-11-20 05:53:09.455290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.833 ms 00:41:49.787 [2024-11-20 05:53:09.455298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.787 [2024-11-20 05:53:09.489664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.787 [2024-11-20 05:53:09.489711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:49.787 [2024-11-20 05:53:09.489740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.380 ms 00:41:49.787 [2024-11-20 05:53:09.489747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.787 [2024-11-20 05:53:09.523938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.787 [2024-11-20 05:53:09.523980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:49.787 [2024-11-20 05:53:09.524008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.206 ms 00:41:49.787 [2024-11-20 05:53:09.524016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.787 [2024-11-20 05:53:09.557633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.787 [2024-11-20 05:53:09.557677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:49.787 [2024-11-20 05:53:09.557688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.579 ms 00:41:49.787 [2024-11-20 05:53:09.557696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.787 [2024-11-20 05:53:09.557761] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:49.787 [2024-11-20 05:53:09.557778] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 
05:53:09.557982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.557996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.558003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:49.787 [2024-11-20 05:53:09.558011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:41:49.788 [2024-11-20 05:53:09.558166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:49.788 [2024-11-20 05:53:09.558568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:49.788 [2024-11-20 05:53:09.558577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:41:49.788 [2024-11-20 05:53:09.558586] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:49.788 [2024-11-20 05:53:09.558594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:49.788 [2024-11-20 05:53:09.558602] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:49.788 [2024-11-20 05:53:09.558611] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:49.788 [2024-11-20 05:53:09.558619] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:49.788 [2024-11-20 05:53:09.558627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:49.788 [2024-11-20 05:53:09.558635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:49.788 [2024-11-20 05:53:09.558641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:49.788 [2024-11-20 05:53:09.558648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:49.788 [2024-11-20 05:53:09.558656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.788 [2024-11-20 05:53:09.558668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:49.788 [2024-11-20 05:53:09.558677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:41:49.788 [2024-11-20 05:53:09.558685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.788 [2024-11-20 05:53:09.580062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.788 [2024-11-20 05:53:09.580099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:49.788 [2024-11-20 05:53:09.580110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.397 ms 00:41:49.788 [2024-11-20 05:53:09.580118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.788 [2024-11-20 05:53:09.580787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:49.788 [2024-11-20 05:53:09.580822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:49.788 [2024-11-20 05:53:09.580832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:41:49.788 [2024-11-20 05:53:09.580840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.788 [2024-11-20 05:53:09.636434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:49.789 [2024-11-20 05:53:09.636482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:49.789 [2024-11-20 05:53:09.636494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:49.789 [2024-11-20 05:53:09.636519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.789 [2024-11-20 05:53:09.636644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:49.789 [2024-11-20 05:53:09.636653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:49.789 [2024-11-20 05:53:09.636662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:49.789 [2024-11-20 05:53:09.636669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:41:49.789 [2024-11-20 05:53:09.636723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:49.789 [2024-11-20 05:53:09.636735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:49.789 [2024-11-20 05:53:09.636743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:49.789 [2024-11-20 05:53:09.636751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:49.789 [2024-11-20 05:53:09.636770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:49.789 [2024-11-20 05:53:09.636783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:49.789 [2024-11-20 05:53:09.636791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:49.789 [2024-11-20 05:53:09.636798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.768625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.768706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:50.048 [2024-11-20 05:53:09.768722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.768730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.872432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.872497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:50.048 [2024-11-20 05:53:09.872513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.872522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.872635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.872645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:50.048 [2024-11-20 05:53:09.872654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.872663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.872692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.872701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:50.048 [2024-11-20 05:53:09.872717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.872725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.872852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.872870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:50.048 [2024-11-20 05:53:09.872879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.872887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.872927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.872938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:50.048 [2024-11-20 05:53:09.872946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 
[2024-11-20 05:53:09.872958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.873023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.873032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:50.048 [2024-11-20 05:53:09.873041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.873049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.873102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:50.048 [2024-11-20 05:53:09.873112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:50.048 [2024-11-20 05:53:09.873124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:50.048 [2024-11-20 05:53:09.873131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:50.048 [2024-11-20 05:53:09.873301] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.094 ms, result 0 00:41:51.424 00:41:51.424 00:41:51.424 05:53:11 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76941 00:41:51.424 05:53:11 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:41:51.424 05:53:11 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76941 00:41:51.424 05:53:11 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 76941 ']' 00:41:51.424 05:53:11 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:51.424 05:53:11 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:41:51.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:51.424 05:53:11 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:51.424 05:53:11 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:41:51.424 05:53:11 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:41:51.424 [2024-11-20 05:53:11.334830] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:41:51.424 [2024-11-20 05:53:11.334981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76941 ] 00:41:51.683 [2024-11-20 05:53:11.514595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.942 [2024-11-20 05:53:11.645032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.879 05:53:12 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:41:52.879 05:53:12 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:41:52.879 05:53:12 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:41:53.140 [2024-11-20 05:53:12.835207] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:53.140 [2024-11-20 05:53:12.835291] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:53.140 [2024-11-20 05:53:13.012782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.012880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:53.140 [2024-11-20 05:53:13.012898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:53.140 [2024-11-20 05:53:13.012906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.016142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.016175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:53.140 [2024-11-20 05:53:13.016187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:41:53.140 [2024-11-20 05:53:13.016195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.016285] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:53.140 [2024-11-20 05:53:13.017289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:53.140 [2024-11-20 05:53:13.017316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.017325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:53.140 [2024-11-20 05:53:13.017336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:41:53.140 [2024-11-20 05:53:13.017343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.019868] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:53.140 [2024-11-20 05:53:13.039826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.039890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:53.140 [2024-11-20 05:53:13.039903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.001 ms 00:41:53.140 [2024-11-20 05:53:13.039914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.040007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.040020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:53.140 [2024-11-20 05:53:13.040029] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:41:53.140 [2024-11-20 05:53:13.040038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.052528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.052568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:53.140 [2024-11-20 05:53:13.052594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.461 ms 00:41:53.140 [2024-11-20 05:53:13.052605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.052739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.052755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:53.140 [2024-11-20 05:53:13.052765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:41:53.140 [2024-11-20 05:53:13.052775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.052810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.140 [2024-11-20 05:53:13.052839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:53.140 [2024-11-20 05:53:13.052847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:53.140 [2024-11-20 05:53:13.052864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.140 [2024-11-20 05:53:13.052908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:41:53.401 [2024-11-20 05:53:13.058541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.401 [2024-11-20 05:53:13.058570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:53.401 [2024-11-20 05:53:13.058583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.649 ms 00:41:53.401 [2024-11-20 05:53:13.058591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.401 [2024-11-20 05:53:13.058654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.401 [2024-11-20 05:53:13.058664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:53.401 [2024-11-20 05:53:13.058676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:53.401 [2024-11-20 05:53:13.058689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.401 [2024-11-20 05:53:13.058716] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:53.401 [2024-11-20 05:53:13.058743] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:53.401 [2024-11-20 05:53:13.058797] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:53.401 [2024-11-20 05:53:13.058826] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:53.401 [2024-11-20 05:53:13.058937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:53.401 [2024-11-20 05:53:13.058951] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:53.401 [2024-11-20 05:53:13.058969] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:53.401 [2024-11-20 05:53:13.058980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:53.401 [2024-11-20 05:53:13.058991] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:53.401 [2024-11-20 05:53:13.059000] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:41:53.401 [2024-11-20 05:53:13.059017] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:53.401 [2024-11-20 05:53:13.059024] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:53.401 [2024-11-20 05:53:13.059036] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:53.401 [2024-11-20 05:53:13.059045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.401 [2024-11-20 05:53:13.059054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:53.401 [2024-11-20 05:53:13.059062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:41:53.401 [2024-11-20 05:53:13.059072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.401 [2024-11-20 05:53:13.059151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.401 [2024-11-20 05:53:13.059161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:53.401 [2024-11-20 05:53:13.059169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:41:53.401 [2024-11-20 05:53:13.059178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.401 [2024-11-20 05:53:13.059277] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:53.401 [2024-11-20 05:53:13.059296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:53.401 [2024-11-20 05:53:13.059305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:53.401 [2024-11-20 05:53:13.059316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:53.401 [2024-11-20 05:53:13.059324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:53.401 [2024-11-20 05:53:13.059333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:53.401 [2024-11-20 05:53:13.059340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:41:53.401 [2024-11-20 05:53:13.059353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:53.401 [2024-11-20 05:53:13.059360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:41:53.401 [2024-11-20 05:53:13.059369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:53.401 [2024-11-20 05:53:13.059376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:53.402 [2024-11-20 05:53:13.059385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:41:53.402 [2024-11-20 05:53:13.059391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:53.402 [2024-11-20 05:53:13.059399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:53.402 [2024-11-20 05:53:13.059406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:41:53.402 [2024-11-20 05:53:13.059414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:53.402 
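The dump_region output above lists every metadata region as a three-line group: the region name, its offset in MiB, and its size in MiB, first for the NV cache layout and then for the base device. A minimal sketch that folds each group into one aligned row per region, again assuming a saved copy of the original one-entry-per-line console output (ftl.log is a hypothetical name):

# Keep only the dump_region lines, then fold each name/offset/blocks
# triplet into a single aligned row per region.
grep 'dump_region' ftl.log \
  | grep -o 'Region [a-z0-9_]*\|offset: [0-9.]* MiB\|blocks: [0-9.]* MiB' \
  | paste - - - \
  | column -t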
[2024-11-20 05:53:13.059420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:53.402 [2024-11-20 05:53:13.059429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:53.402 [2024-11-20 05:53:13.059460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:53.402 [2024-11-20 05:53:13.059487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:53.402 [2024-11-20 05:53:13.059508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:53.402 [2024-11-20 05:53:13.059533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:53.402 [2024-11-20 05:53:13.059556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:53.402 [2024-11-20 05:53:13.059570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:53.402 [2024-11-20 05:53:13.059582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:41:53.402 [2024-11-20 05:53:13.059588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:53.402 [2024-11-20 05:53:13.059601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:53.402 [2024-11-20 05:53:13.059609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:41:53.402 [2024-11-20 05:53:13.059625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:53.402 [2024-11-20 05:53:13.059643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:41:53.402 [2024-11-20 05:53:13.059650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059661] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:53.402 [2024-11-20 05:53:13.059674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:53.402 [2024-11-20 05:53:13.059686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:53.402 [2024-11-20 05:53:13.059706] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:41:53.402 [2024-11-20 05:53:13.059713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:53.402 [2024-11-20 05:53:13.059724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:53.402 [2024-11-20 05:53:13.059731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:53.402 [2024-11-20 05:53:13.059742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:53.402 [2024-11-20 05:53:13.059749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:53.402 [2024-11-20 05:53:13.059762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:53.402 [2024-11-20 05:53:13.059772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:41:53.402 [2024-11-20 05:53:13.059799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:41:53.402 [2024-11-20 05:53:13.059821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:41:53.402 [2024-11-20 05:53:13.059829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:41:53.402 [2024-11-20 05:53:13.059841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:41:53.402 [2024-11-20 05:53:13.059849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:41:53.402 [2024-11-20 05:53:13.059863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:41:53.402 [2024-11-20 05:53:13.059871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:41:53.402 [2024-11-20 05:53:13.059885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:41:53.402 [2024-11-20 05:53:13.059892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:41:53.402 [2024-11-20 05:53:13.059942] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:53.402 [2024-11-20 
05:53:13.059951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:53.402 [2024-11-20 05:53:13.059974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:53.402 [2024-11-20 05:53:13.059985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:53.402 [2024-11-20 05:53:13.059993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:53.402 [2024-11-20 05:53:13.060005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.402 [2024-11-20 05:53:13.060014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:53.402 [2024-11-20 05:53:13.060026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:41:53.402 [2024-11-20 05:53:13.060034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.402 [2024-11-20 05:53:13.108759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.402 [2024-11-20 05:53:13.108820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:53.402 [2024-11-20 05:53:13.108839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.741 ms 00:41:53.402 [2024-11-20 05:53:13.108854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.402 [2024-11-20 05:53:13.109027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.402 [2024-11-20 05:53:13.109038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:53.402 [2024-11-20 05:53:13.109051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:41:53.402 [2024-11-20 05:53:13.109059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.402 [2024-11-20 05:53:13.160992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.402 [2024-11-20 05:53:13.161040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:53.402 [2024-11-20 05:53:13.161057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.000 ms 00:41:53.403 [2024-11-20 05:53:13.161065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.161164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.161174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:53.403 [2024-11-20 05:53:13.161187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:53.403 [2024-11-20 05:53:13.161194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.162017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.162034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:53.403 [2024-11-20 05:53:13.162052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:41:53.403 [2024-11-20 05:53:13.162061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.162198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.162214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:53.403 [2024-11-20 05:53:13.162227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:41:53.403 [2024-11-20 05:53:13.162235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.187875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.187911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:53.403 [2024-11-20 05:53:13.187939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.658 ms 00:41:53.403 [2024-11-20 05:53:13.187947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.218940] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:41:53.403 [2024-11-20 05:53:13.218975] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:53.403 [2024-11-20 05:53:13.219008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.219016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:53.403 [2024-11-20 05:53:13.219029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.987 ms 00:41:53.403 [2024-11-20 05:53:13.219038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.247696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.247749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:53.403 [2024-11-20 05:53:13.247783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.612 ms 00:41:53.403 [2024-11-20 05:53:13.247792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.265618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.265651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:53.403 [2024-11-20 05:53:13.265671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.779 ms 00:41:53.403 [2024-11-20 05:53:13.265678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.282776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.282817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:53.403 [2024-11-20 05:53:13.282834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.054 ms 00:41:53.403 [2024-11-20 05:53:13.282841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.403 [2024-11-20 05:53:13.283698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.403 [2024-11-20 05:53:13.283724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:53.403 [2024-11-20 05:53:13.283738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:41:53.403 [2024-11-20 05:53:13.283746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 
05:53:13.380667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.380747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:53.663 [2024-11-20 05:53:13.380769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.068 ms 00:41:53.663 [2024-11-20 05:53:13.380779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.392351] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:41:53.663 [2024-11-20 05:53:13.419321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.419401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:53.663 [2024-11-20 05:53:13.419421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.435 ms 00:41:53.663 [2024-11-20 05:53:13.419433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.419580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.419597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:53.663 [2024-11-20 05:53:13.419607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:41:53.663 [2024-11-20 05:53:13.419620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.419704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.419718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:53.663 [2024-11-20 05:53:13.419727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:41:53.663 [2024-11-20 05:53:13.419745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.419770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.419783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:53.663 [2024-11-20 05:53:13.419791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:53.663 [2024-11-20 05:53:13.419823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.419889] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:53.663 [2024-11-20 05:53:13.419910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.419919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:53.663 [2024-11-20 05:53:13.419939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:53.663 [2024-11-20 05:53:13.419947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.457490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.457543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:53.663 [2024-11-20 05:53:13.457562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.567 ms 00:41:53.663 [2024-11-20 05:53:13.457570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.457696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.663 [2024-11-20 05:53:13.457711] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:53.663 [2024-11-20 05:53:13.457725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:41:53.663 [2024-11-20 05:53:13.457739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.663 [2024-11-20 05:53:13.459168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:53.663 [2024-11-20 05:53:13.463562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.818 ms, result 0 00:41:53.663 [2024-11-20 05:53:13.464725] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:53.663 Some configs were skipped because the RPC state that can call them passed over. 00:41:53.663 05:53:13 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:41:53.923 [2024-11-20 05:53:13.684395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:53.923 [2024-11-20 05:53:13.684476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:41:53.923 [2024-11-20 05:53:13.684494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:41:53.923 [2024-11-20 05:53:13.684509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:53.923 [2024-11-20 05:53:13.684555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.935 ms, result 0 00:41:53.923 true 00:41:53.923 05:53:13 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:41:54.182 [2024-11-20 05:53:13.891800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:54.182 [2024-11-20 05:53:13.891868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:41:54.182 [2024-11-20 05:53:13.891891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:41:54.182 [2024-11-20 05:53:13.891901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:54.182 [2024-11-20 05:53:13.891957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.513 ms, result 0 00:41:54.182 true 00:41:54.182 05:53:13 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76941 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 76941 ']' 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 76941 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76941 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76941' 00:41:54.182 killing process with pid 76941 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 76941 00:41:54.182 05:53:13 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 76941 00:41:55.563 [2024-11-20 05:53:15.152283] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.563 [2024-11-20 05:53:15.152475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:55.563 [2024-11-20 05:53:15.152513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:55.563 [2024-11-20 05:53:15.152535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.563 [2024-11-20 05:53:15.152574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:41:55.564 [2024-11-20 05:53:15.157325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.157402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:55.564 [2024-11-20 05:53:15.157434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:41:55.564 [2024-11-20 05:53:15.157453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.157783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.157840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:55.564 [2024-11-20 05:53:15.157876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:41:55.564 [2024-11-20 05:53:15.157898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.161231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.161304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:55.564 [2024-11-20 05:53:15.161336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.296 ms 00:41:55.564 [2024-11-20 05:53:15.161356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.166908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.166976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:55.564 [2024-11-20 05:53:15.167005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.506 ms 00:41:55.564 [2024-11-20 05:53:15.167025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.181576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.181643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:55.564 [2024-11-20 05:53:15.181678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.506 ms 00:41:55.564 [2024-11-20 05:53:15.181711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.191991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.192060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:55.564 [2024-11-20 05:53:15.192091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.204 ms 00:41:55.564 [2024-11-20 05:53:15.192111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.192268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.192324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:55.564 [2024-11-20 05:53:15.192347] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:41:55.564 [2024-11-20 05:53:15.192378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.207515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.207607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:55.564 [2024-11-20 05:53:15.207637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.131 ms 00:41:55.564 [2024-11-20 05:53:15.207656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.222070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.222169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:55.564 [2024-11-20 05:53:15.222216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.361 ms 00:41:55.564 [2024-11-20 05:53:15.222234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.236103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.236164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:55.564 [2024-11-20 05:53:15.236212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.816 ms 00:41:55.564 [2024-11-20 05:53:15.236231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.249805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.564 [2024-11-20 05:53:15.249874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:55.564 [2024-11-20 05:53:15.249904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.496 ms 00:41:55.564 [2024-11-20 05:53:15.249923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.564 [2024-11-20 05:53:15.249978] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:55.564 [2024-11-20 05:53:15.250007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 
05:53:15.250475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.250998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:41:55.564 [2024-11-20 05:53:15.251481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:41:55.564 [2024-11-20 05:53:15.251928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.251937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.251953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.251961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.251974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.251982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.251996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:41:55.565 [2024-11-20 05:53:15.252436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:41:55.565 [2024-11-20 05:53:15.252459] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:41:55.565 [2024-11-20 05:53:15.252484] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:41:55.565 [2024-11-20 05:53:15.252503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:41:55.565 [2024-11-20 05:53:15.252511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:41:55.565 [2024-11-20 05:53:15.252524] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:41:55.565 [2024-11-20 05:53:15.252531] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:41:55.565 [2024-11-20 05:53:15.252545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:41:55.565 [2024-11-20 05:53:15.252552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:41:55.565 [2024-11-20 05:53:15.252563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:41:55.565 [2024-11-20 05:53:15.252570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:41:55.565 [2024-11-20 05:53:15.252582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
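
For context: the two bdev_ftl_unmap RPC calls earlier in this run trim 1024 blocks at LBA 0 and 1024 blocks at LBA 23591936, i.e. the first and last 1024 entries of the 23592960-entry L2P, and the statistics block just above is dumped during teardown: all 100 bands are still free, total writes is 960 (all metadata), user writes is 0, so the write amplification factor (ratio of total media writes to user writes) prints as inf. A minimal sketch of issuing the same trims by hand and recomputing WAF from a captured console log; the running target, the repo path, and the ftl.log capture file are assumptions, not part of this run:

    # Assumptions: an SPDK target is already running with an FTL bdev named
    # ftl0, and its console output has been captured to ftl.log.
    SPDK=/home/vagrant/spdk_repo/spdk

    # Mirror the two unmap calls from the log: first and last 1024 blocks.
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$SPDK/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

    # WAF = total writes / user writes; with zero user writes it is infinite,
    # which matches the "WAF: inf" line in the dump above.
    awk '/total writes:/ {t=$NF} /user writes:/ {u=$NF}
         END {print (u ? t / u : "inf")}' ftl.log
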
00:41:55.565 [2024-11-20 05:53:15.252590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:41:55.565 [2024-11-20 05:53:15.252604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:41:55.565 [2024-11-20 05:53:15.252611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.272411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.565 [2024-11-20 05:53:15.272443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:41:55.565 [2024-11-20 05:53:15.272478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.789 ms 00:41:55.565 [2024-11-20 05:53:15.272487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.273106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:55.565 [2024-11-20 05:53:15.273117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:55.565 [2024-11-20 05:53:15.273131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:41:55.565 [2024-11-20 05:53:15.273143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.343359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.565 [2024-11-20 05:53:15.343405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:55.565 [2024-11-20 05:53:15.343422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.565 [2024-11-20 05:53:15.343430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.343533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.565 [2024-11-20 05:53:15.343543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:55.565 [2024-11-20 05:53:15.343556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.565 [2024-11-20 05:53:15.343569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.343634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.565 [2024-11-20 05:53:15.343645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:55.565 [2024-11-20 05:53:15.343665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.565 [2024-11-20 05:53:15.343672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.343696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.565 [2024-11-20 05:53:15.343704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:55.565 [2024-11-20 05:53:15.343717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.565 [2024-11-20 05:53:15.343725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.565 [2024-11-20 05:53:15.470932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.565 [2024-11-20 05:53:15.470997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:55.565 [2024-11-20 05:53:15.471031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.565 [2024-11-20 05:53:15.471040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 
05:53:15.573868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.573937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:55.825 [2024-11-20 05:53:15.573955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.573970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.574120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:55.825 [2024-11-20 05:53:15.574138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.574146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.574191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:55.825 [2024-11-20 05:53:15.574203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.574211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.574359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:55.825 [2024-11-20 05:53:15.574371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.574379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.574439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:55.825 [2024-11-20 05:53:15.574452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.574459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.574526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:55.825 [2024-11-20 05:53:15.574543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.574552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:55.825 [2024-11-20 05:53:15.574618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:55.825 [2024-11-20 05:53:15.574631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:55.825 [2024-11-20 05:53:15.574639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:55.825 [2024-11-20 05:53:15.574827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 423.306 ms, result 0 00:41:56.764 05:53:16 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:41:56.764 05:53:16 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:41:57.024 [2024-11-20 05:53:16.737609] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:41:57.024 [2024-11-20 05:53:16.737755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77005 ] 00:41:57.024 [2024-11-20 05:53:16.914533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:57.284 [2024-11-20 05:53:17.053081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:57.854 [2024-11-20 05:53:17.476883] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:57.854 [2024-11-20 05:53:17.477074] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:57.854 [2024-11-20 05:53:17.638066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.638127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:57.854 [2024-11-20 05:53:17.638142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:57.854 [2024-11-20 05:53:17.638151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.641277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.641314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:57.854 [2024-11-20 05:53:17.641324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:41:57.854 [2024-11-20 05:53:17.641349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.641434] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:57.854 [2024-11-20 05:53:17.642451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:57.854 [2024-11-20 05:53:17.642486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.642495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:57.854 [2024-11-20 05:53:17.642504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:41:57.854 [2024-11-20 05:53:17.642512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.645019] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:57.854 [2024-11-20 05:53:17.664778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.664834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:57.854 [2024-11-20 05:53:17.664846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.797 ms 00:41:57.854 [2024-11-20 05:53:17.664855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.664968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.664981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:57.854 [2024-11-20 05:53:17.664990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:41:57.854 [2024-11-20 05:53:17.664998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.677388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.677417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:57.854 [2024-11-20 05:53:17.677428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.372 ms 00:41:57.854 [2024-11-20 05:53:17.677436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.677561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.677575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:57.854 [2024-11-20 05:53:17.677585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:41:57.854 [2024-11-20 05:53:17.677593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.677626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.677639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:57.854 [2024-11-20 05:53:17.677647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:57.854 [2024-11-20 05:53:17.677654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.677680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:41:57.854 [2024-11-20 05:53:17.683545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.683574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:57.854 [2024-11-20 05:53:17.683585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.885 ms 00:41:57.854 [2024-11-20 05:53:17.683592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.683642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.683651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:57.854 [2024-11-20 05:53:17.683660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:57.854 [2024-11-20 05:53:17.683667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.683686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:57.854 [2024-11-20 05:53:17.683714] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:57.854 [2024-11-20 05:53:17.683750] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:57.854 [2024-11-20 05:53:17.683765] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:57.854 [2024-11-20 05:53:17.683861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:57.854 [2024-11-20 05:53:17.683872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:57.854 [2024-11-20 05:53:17.683882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:57.854 [2024-11-20 05:53:17.683891] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:57.854 [2024-11-20 05:53:17.683905] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:57.854 [2024-11-20 05:53:17.683913] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:41:57.854 [2024-11-20 05:53:17.683921] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:57.854 [2024-11-20 05:53:17.683929] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:57.854 [2024-11-20 05:53:17.683936] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:57.854 [2024-11-20 05:53:17.683944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.683952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:57.854 [2024-11-20 05:53:17.683960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:41:57.854 [2024-11-20 05:53:17.683968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.684041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.854 [2024-11-20 05:53:17.684054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:57.854 [2024-11-20 05:53:17.684060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:41:57.854 [2024-11-20 05:53:17.684068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.854 [2024-11-20 05:53:17.684154] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:57.854 [2024-11-20 05:53:17.684164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:57.854 [2024-11-20 05:53:17.684171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:57.854 [2024-11-20 05:53:17.684179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:57.854 [2024-11-20 05:53:17.684194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:41:57.854 [2024-11-20 05:53:17.684210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:57.854 [2024-11-20 05:53:17.684218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:57.854 [2024-11-20 05:53:17.684232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:57.854 [2024-11-20 05:53:17.684239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:41:57.854 [2024-11-20 05:53:17.684245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:57.854 [2024-11-20 05:53:17.684265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:57.854 [2024-11-20 05:53:17.684273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:41:57.854 [2024-11-20 05:53:17.684280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684288] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:57.854 [2024-11-20 05:53:17.684295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:41:57.854 [2024-11-20 05:53:17.684302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:57.854 [2024-11-20 05:53:17.684315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:57.854 [2024-11-20 05:53:17.684329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:57.854 [2024-11-20 05:53:17.684336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:57.854 [2024-11-20 05:53:17.684349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:57.854 [2024-11-20 05:53:17.684356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:41:57.854 [2024-11-20 05:53:17.684363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:57.855 [2024-11-20 05:53:17.684369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:57.855 [2024-11-20 05:53:17.684376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:41:57.855 [2024-11-20 05:53:17.684382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:57.855 [2024-11-20 05:53:17.684388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:57.855 [2024-11-20 05:53:17.684394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:41:57.855 [2024-11-20 05:53:17.684401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:57.855 [2024-11-20 05:53:17.684407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:57.855 [2024-11-20 05:53:17.684413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:41:57.855 [2024-11-20 05:53:17.684419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:57.855 [2024-11-20 05:53:17.684425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:57.855 [2024-11-20 05:53:17.684431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:41:57.855 [2024-11-20 05:53:17.684437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:57.855 [2024-11-20 05:53:17.684444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:57.855 [2024-11-20 05:53:17.684450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:41:57.855 [2024-11-20 05:53:17.684457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:57.855 [2024-11-20 05:53:17.684463] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:57.855 [2024-11-20 05:53:17.684470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:57.855 [2024-11-20 05:53:17.684478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:57.855 [2024-11-20 05:53:17.684489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:57.855 [2024-11-20 05:53:17.684496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:57.855 
[2024-11-20 05:53:17.684503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:57.855 [2024-11-20 05:53:17.684509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:57.855 [2024-11-20 05:53:17.684516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:57.855 [2024-11-20 05:53:17.684521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:57.855 [2024-11-20 05:53:17.684528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:57.855 [2024-11-20 05:53:17.684537] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:57.855 [2024-11-20 05:53:17.684547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:41:57.855 [2024-11-20 05:53:17.684562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:41:57.855 [2024-11-20 05:53:17.684570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:41:57.855 [2024-11-20 05:53:17.684577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:41:57.855 [2024-11-20 05:53:17.684584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:41:57.855 [2024-11-20 05:53:17.684591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:41:57.855 [2024-11-20 05:53:17.684598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:41:57.855 [2024-11-20 05:53:17.684605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:41:57.855 [2024-11-20 05:53:17.684611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:41:57.855 [2024-11-20 05:53:17.684619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:41:57.855 [2024-11-20 05:53:17.684652] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:57.855 [2024-11-20 05:53:17.684660] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:41:57.855 [2024-11-20 05:53:17.684674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:57.855 [2024-11-20 05:53:17.684681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:57.855 [2024-11-20 05:53:17.684689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:57.855 [2024-11-20 05:53:17.684697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.855 [2024-11-20 05:53:17.684705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:57.855 [2024-11-20 05:53:17.684717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:41:57.855 [2024-11-20 05:53:17.684724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.855 [2024-11-20 05:53:17.735075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.855 [2024-11-20 05:53:17.735200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:57.855 [2024-11-20 05:53:17.735248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.386 ms 00:41:57.855 [2024-11-20 05:53:17.735269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:57.855 [2024-11-20 05:53:17.735474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:57.855 [2024-11-20 05:53:17.735514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:57.855 [2024-11-20 05:53:17.735538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:41:57.855 [2024-11-20 05:53:17.735585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.797749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.797883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:58.115 [2024-11-20 05:53:17.797920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.236 ms 00:41:58.115 [2024-11-20 05:53:17.797948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.798084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.798131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:58.115 [2024-11-20 05:53:17.798155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:58.115 [2024-11-20 05:53:17.798211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.799008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.799053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:58.115 [2024-11-20 05:53:17.799083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:41:58.115 [2024-11-20 05:53:17.799110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 
05:53:17.799254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.799292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:58.115 [2024-11-20 05:53:17.799320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:41:58.115 [2024-11-20 05:53:17.799348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.822855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.822938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:58.115 [2024-11-20 05:53:17.822984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.506 ms 00:41:58.115 [2024-11-20 05:53:17.823004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.842645] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:41:58.115 [2024-11-20 05:53:17.842734] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:58.115 [2024-11-20 05:53:17.842787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.842808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:58.115 [2024-11-20 05:53:17.842843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.671 ms 00:41:58.115 [2024-11-20 05:53:17.842863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.871679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.871771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:58.115 [2024-11-20 05:53:17.871823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.765 ms 00:41:58.115 [2024-11-20 05:53:17.871845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.889834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.889908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:58.115 [2024-11-20 05:53:17.889950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.934 ms 00:41:58.115 [2024-11-20 05:53:17.889969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.906978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.907049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:58.115 [2024-11-20 05:53:17.907091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.957 ms 00:41:58.115 [2024-11-20 05:53:17.907110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:17.907976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:17.908039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:58.115 [2024-11-20 05:53:17.908073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:41:58.115 [2024-11-20 05:53:17.908095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:18.003641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:41:58.115 [2024-11-20 05:53:18.003824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:58.115 [2024-11-20 05:53:18.003845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.653 ms 00:41:58.115 [2024-11-20 05:53:18.003855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.115 [2024-11-20 05:53:18.015071] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:41:58.374 [2024-11-20 05:53:18.042124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.374 [2024-11-20 05:53:18.042185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:58.375 [2024-11-20 05:53:18.042201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.168 ms 00:41:58.375 [2024-11-20 05:53:18.042216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.042365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.375 [2024-11-20 05:53:18.042378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:58.375 [2024-11-20 05:53:18.042387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:41:58.375 [2024-11-20 05:53:18.042396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.042468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.375 [2024-11-20 05:53:18.042477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:58.375 [2024-11-20 05:53:18.042487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:41:58.375 [2024-11-20 05:53:18.042495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.042543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.375 [2024-11-20 05:53:18.042557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:58.375 [2024-11-20 05:53:18.042566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:41:58.375 [2024-11-20 05:53:18.042574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.042615] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:58.375 [2024-11-20 05:53:18.042625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.375 [2024-11-20 05:53:18.042634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:58.375 [2024-11-20 05:53:18.042643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:58.375 [2024-11-20 05:53:18.042651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.080135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.375 [2024-11-20 05:53:18.080245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:58.375 [2024-11-20 05:53:18.080263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.530 ms 00:41:58.375 [2024-11-20 05:53:18.080272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.080392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:58.375 [2024-11-20 05:53:18.080404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:41:58.375 [2024-11-20 05:53:18.080414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:41:58.375 [2024-11-20 05:53:18.080422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:58.375 [2024-11-20 05:53:18.081817] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:58.375 [2024-11-20 05:53:18.086228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.211 ms, result 0 00:41:58.375 [2024-11-20 05:53:18.087070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:58.375 [2024-11-20 05:53:18.105248] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:59.313  [2024-11-20T05:53:20.171Z] Copying: 31/256 [MB] (31 MBps) [2024-11-20T05:53:21.105Z] Copying: 62/256 [MB] (30 MBps) [2024-11-20T05:53:22.480Z] Copying: 92/256 [MB] (30 MBps) [2024-11-20T05:53:23.418Z] Copying: 120/256 [MB] (28 MBps) [2024-11-20T05:53:24.356Z] Copying: 150/256 [MB] (29 MBps) [2024-11-20T05:53:25.294Z] Copying: 180/256 [MB] (29 MBps) [2024-11-20T05:53:26.232Z] Copying: 209/256 [MB] (29 MBps) [2024-11-20T05:53:26.800Z] Copying: 237/256 [MB] (28 MBps) [2024-11-20T05:53:26.800Z] Copying: 256/256 [MB] (average 29 MBps)[2024-11-20 05:53:26.720634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:06.881 [2024-11-20 05:53:26.736117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:06.881 [2024-11-20 05:53:26.736224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:06.881 [2024-11-20 05:53:26.736255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:06.881 [2024-11-20 05:53:26.736289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:06.881 [2024-11-20 05:53:26.736325] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:42:06.881 [2024-11-20 05:53:26.741213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:06.881 [2024-11-20 05:53:26.741278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:06.881 [2024-11-20 05:53:26.741323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:42:06.881 [2024-11-20 05:53:26.741343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:06.881 [2024-11-20 05:53:26.741618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:06.881 [2024-11-20 05:53:26.741648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:06.881 [2024-11-20 05:53:26.741670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:42:06.881 [2024-11-20 05:53:26.741726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:06.881 [2024-11-20 05:53:26.744591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:06.881 [2024-11-20 05:53:26.744642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:06.881 [2024-11-20 05:53:26.744667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:42:06.881 [2024-11-20 05:53:26.744702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:06.881 [2024-11-20 05:53:26.750219] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:06.881 [2024-11-20 05:53:26.750278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:06.881 [2024-11-20 05:53:26.750305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.498 ms 00:42:06.881 [2024-11-20 05:53:26.750324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:06.881 [2024-11-20 05:53:26.785536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:06.882 [2024-11-20 05:53:26.785616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:06.882 [2024-11-20 05:53:26.785661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.204 ms 00:42:06.882 [2024-11-20 05:53:26.785681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.806207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.142 [2024-11-20 05:53:26.806286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:07.142 [2024-11-20 05:53:26.806336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.516 ms 00:42:07.142 [2024-11-20 05:53:26.806356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.806500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.142 [2024-11-20 05:53:26.806550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:07.142 [2024-11-20 05:53:26.806587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:42:07.142 [2024-11-20 05:53:26.806608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.842526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.142 [2024-11-20 05:53:26.842600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:07.142 [2024-11-20 05:53:26.842642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.875 ms 00:42:07.142 [2024-11-20 05:53:26.842662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.877390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.142 [2024-11-20 05:53:26.877463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:07.142 [2024-11-20 05:53:26.877516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.749 ms 00:42:07.142 [2024-11-20 05:53:26.877536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.911794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.142 [2024-11-20 05:53:26.911870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:07.142 [2024-11-20 05:53:26.911913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.277 ms 00:42:07.142 [2024-11-20 05:53:26.911932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.946015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.142 [2024-11-20 05:53:26.946105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:07.142 [2024-11-20 05:53:26.946133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.074 ms 00:42:07.142 [2024-11-20 05:53:26.946152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:42:07.142 [2024-11-20 05:53:26.946197] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:07.142 [2024-11-20 05:53:26.946224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:07.142 [2024-11-20 05:53:26.946254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:42:07.143 [2024-11-20 05:53:26.946709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.946992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:07.143 [2024-11-20 05:53:26.947244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947332] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:07.144 [2024-11-20 05:53:26.947356] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:07.144 [2024-11-20 05:53:26.947363] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:42:07.144 [2024-11-20 05:53:26.947371] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:07.144 [2024-11-20 05:53:26.947378] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:07.144 [2024-11-20 05:53:26.947385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:07.144 [2024-11-20 05:53:26.947393] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:07.144 [2024-11-20 05:53:26.947400] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:07.144 [2024-11-20 05:53:26.947408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:07.144 [2024-11-20 05:53:26.947416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:07.144 [2024-11-20 05:53:26.947422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:07.144 [2024-11-20 05:53:26.947428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:07.144 [2024-11-20 05:53:26.947436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.144 [2024-11-20 05:53:26.947448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:07.144 [2024-11-20 05:53:26.947457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:42:07.144 [2024-11-20 05:53:26.947464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.144 [2024-11-20 05:53:26.967981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.144 [2024-11-20 05:53:26.968053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:07.144 [2024-11-20 05:53:26.968067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.533 ms 00:42:07.144 [2024-11-20 05:53:26.968075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.144 [2024-11-20 05:53:26.968696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:07.144 [2024-11-20 05:53:26.968711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:07.144 [2024-11-20 05:53:26.968720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:42:07.144 [2024-11-20 05:53:26.968728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.144 [2024-11-20 05:53:27.026195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.144 [2024-11-20 05:53:27.026305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:07.144 [2024-11-20 05:53:27.026320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.144 [2024-11-20 05:53:27.026328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.144 [2024-11-20 05:53:27.026443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.144 [2024-11-20 05:53:27.026454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:07.144 
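Editor's note: in the statistics dump just above, WAF (write amplification factor) is reported as "inf": the device recorded 960 total writes while this trim test issued 0 user writes, so the ratio has a zero denominator. A back-of-the-envelope reproduction, assuming the usual media-writes-over-host-writes definition (the real computation lives in SPDK's ftl_debug.c; the exact formula there is an assumption here):

def waf(total_writes: int, user_writes: int) -> float:
    # Write amplification = media writes / host writes; with no host
    # writes the ratio is undefined, which the log renders as "inf".
    return total_writes / user_writes if user_writes else float("inf")

print(waf(960, 0))  # inf, matching the "WAF: inf" line above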
[2024-11-20 05:53:27.026463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.144 [2024-11-20 05:53:27.026470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.144 [2024-11-20 05:53:27.026533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.144 [2024-11-20 05:53:27.026545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:07.144 [2024-11-20 05:53:27.026569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.144 [2024-11-20 05:53:27.026577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.144 [2024-11-20 05:53:27.026598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.144 [2024-11-20 05:53:27.026615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:07.144 [2024-11-20 05:53:27.026624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.144 [2024-11-20 05:53:27.026632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.161357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.161436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:07.404 [2024-11-20 05:53:27.161451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.161459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.268973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.269107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:07.404 [2024-11-20 05:53:27.269125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.269149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.269266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.269277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:07.404 [2024-11-20 05:53:27.269285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.269293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.269325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.269335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:07.404 [2024-11-20 05:53:27.269350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.269358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.269478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.269500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:07.404 [2024-11-20 05:53:27.269525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.269533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.269576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.269587] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:07.404 [2024-11-20 05:53:27.269595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.269609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.404 [2024-11-20 05:53:27.269656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.404 [2024-11-20 05:53:27.269666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:07.404 [2024-11-20 05:53:27.269673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.404 [2024-11-20 05:53:27.269682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.405 [2024-11-20 05:53:27.269735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:07.405 [2024-11-20 05:53:27.269745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:07.405 [2024-11-20 05:53:27.269758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:07.405 [2024-11-20 05:53:27.269766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:07.405 [2024-11-20 05:53:27.269959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.850 ms, result 0 00:42:08.790 00:42:08.790 00:42:08.790 05:53:28 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:42:08.790 05:53:28 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:42:09.064 05:53:28 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:09.064 [2024-11-20 05:53:28.914627] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
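Editor's note: the three shell commands above are the heart of the trim check. The first 4 MiB of the read-back file is compared byte-for-byte against /dev/zero (trimmed LBAs must read as zeroes), the file is checksummed with md5sum, and spdk_dd then rewrites 1024 blocks of a random pattern through ftl0 for the next round. A minimal Python equivalent of the cmp step, using the same path and window as the command shown:

CHUNK = 4 * 1024 * 1024  # --bytes=4194304 in the cmp invocation above

with open("/home/vagrant/spdk_repo/spdk/test/ftl/data", "rb") as f:
    head = f.read(CHUNK)

# Trimmed regions must read back as zeroes, which is exactly what
# `cmp --bytes=4194304 data /dev/zero` asserts.
assert head == bytes(CHUNK), "trimmed range is not all-zero"
print("first 4 MiB is zeroed")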
00:42:09.064 [2024-11-20 05:53:28.914782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77137 ] 00:42:09.339 [2024-11-20 05:53:29.079215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:09.339 [2024-11-20 05:53:29.209215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:09.918 [2024-11-20 05:53:29.610795] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:09.918 [2024-11-20 05:53:29.610890] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:09.918 [2024-11-20 05:53:29.772379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.772445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:09.918 [2024-11-20 05:53:29.772459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:42:09.918 [2024-11-20 05:53:29.772486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.775723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.775814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:09.918 [2024-11-20 05:53:29.775845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:42:09.918 [2024-11-20 05:53:29.775854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.775964] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:09.918 [2024-11-20 05:53:29.776945] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:09.918 [2024-11-20 05:53:29.776981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.776990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:09.918 [2024-11-20 05:53:29.776998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:42:09.918 [2024-11-20 05:53:29.777006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.779488] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:09.918 [2024-11-20 05:53:29.797892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.797939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:09.918 [2024-11-20 05:53:29.797950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.439 ms 00:42:09.918 [2024-11-20 05:53:29.797974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.798078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.798094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:09.918 [2024-11-20 05:53:29.798104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:42:09.918 [2024-11-20 05:53:29.798112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.810319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
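Editor's note: every management step in these logs is a fixed quartet of trace_step records from mngt/ftl_mngt.c: Action (or Rollback), name, duration, status. When a startup looks slow, those quartets are easy to mine for a per-step duration table. A small parser for the format shown throughout this log (step_durations is a hypothetical helper, not part of SPDK; it assumes one record per line, as in the raw console output):

import re

# Matches the "name: ..." and "duration: ... ms" trace_step records.
NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)$")
DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_durations(lines):
    name = None
    for line in lines:
        if m := NAME.search(line):
            name = m.group(1).strip()
        elif (m := DUR.search(line)) and name is not None:
            yield name, float(m.group(1))
            name = None

# e.g.: sorted(step_durations(open("console.log")), key=lambda kv: -kv[1])[:5]
# surfaces the slowest steps, such as "Restore P2L checkpoints" at ~95 ms above.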
00:42:09.918 [2024-11-20 05:53:29.810350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:09.918 [2024-11-20 05:53:29.810359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.177 ms 00:42:09.918 [2024-11-20 05:53:29.810366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.810494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.810508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:09.918 [2024-11-20 05:53:29.810517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:42:09.918 [2024-11-20 05:53:29.810524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.810554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.810567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:09.918 [2024-11-20 05:53:29.810575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:09.918 [2024-11-20 05:53:29.810581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.810604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:42:09.918 [2024-11-20 05:53:29.816006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.816034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:09.918 [2024-11-20 05:53:29.816044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.419 ms 00:42:09.918 [2024-11-20 05:53:29.816066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.816127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.918 [2024-11-20 05:53:29.816139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:09.918 [2024-11-20 05:53:29.816147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:09.918 [2024-11-20 05:53:29.816154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.918 [2024-11-20 05:53:29.816173] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:09.918 [2024-11-20 05:53:29.816214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:09.918 [2024-11-20 05:53:29.816258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:09.918 [2024-11-20 05:53:29.816274] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:09.918 [2024-11-20 05:53:29.816361] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:09.918 [2024-11-20 05:53:29.816371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:09.918 [2024-11-20 05:53:29.816387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:09.919 [2024-11-20 05:53:29.816397] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816411] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816419] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:42:09.919 [2024-11-20 05:53:29.816427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:09.919 [2024-11-20 05:53:29.816434] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:09.919 [2024-11-20 05:53:29.816441] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:09.919 [2024-11-20 05:53:29.816449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.919 [2024-11-20 05:53:29.816457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:09.919 [2024-11-20 05:53:29.816465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:42:09.919 [2024-11-20 05:53:29.816472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.919 [2024-11-20 05:53:29.816541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.919 [2024-11-20 05:53:29.816554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:09.919 [2024-11-20 05:53:29.816561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:42:09.919 [2024-11-20 05:53:29.816568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:09.919 [2024-11-20 05:53:29.816649] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:09.919 [2024-11-20 05:53:29.816659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:09.919 [2024-11-20 05:53:29.816668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:09.919 [2024-11-20 05:53:29.816688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:09.919 [2024-11-20 05:53:29.816711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:09.919 [2024-11-20 05:53:29.816724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:09.919 [2024-11-20 05:53:29.816730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:42:09.919 [2024-11-20 05:53:29.816737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:09.919 [2024-11-20 05:53:29.816754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:09.919 [2024-11-20 05:53:29.816761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:42:09.919 [2024-11-20 05:53:29.816767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:09.919 [2024-11-20 05:53:29.816780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816786] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:09.919 [2024-11-20 05:53:29.816799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:09.919 [2024-11-20 05:53:29.816851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:09.919 [2024-11-20 05:53:29.816871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:09.919 [2024-11-20 05:53:29.816890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:09.919 [2024-11-20 05:53:29.816903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:09.919 [2024-11-20 05:53:29.816910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:09.919 [2024-11-20 05:53:29.816922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:09.919 [2024-11-20 05:53:29.816929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:42:09.919 [2024-11-20 05:53:29.816935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:09.919 [2024-11-20 05:53:29.816941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:09.919 [2024-11-20 05:53:29.816948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:42:09.919 [2024-11-20 05:53:29.816954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:09.919 [2024-11-20 05:53:29.816967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:42:09.919 [2024-11-20 05:53:29.816974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.816980] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:09.919 [2024-11-20 05:53:29.816988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:09.919 [2024-11-20 05:53:29.816995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:09.919 [2024-11-20 05:53:29.817005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:09.919 [2024-11-20 05:53:29.817012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:09.919 [2024-11-20 05:53:29.817019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:09.919 [2024-11-20 05:53:29.817026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:09.919 
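Editor's note: the geometry lines in this second startup fit together arithmetically: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB the l2p region occupies, and 2048 P2L checkpoint pages at the 4 KiB FTL block size (an assumption about the page size) is the 8.00 MiB of each p2l0..p2l3 region. Plain arithmetic, no SPDK calls:

ENTRIES = 23_592_960   # "L2P entries" from the layout setup above
ADDR_SIZE = 4          # "L2P address size" in bytes

print(ENTRIES * ADDR_SIZE / 2**20, "MiB")  # 90.0: the l2p region size
print(2048 * 4096 / 2**20, "MiB")          # 8.0: one P2L checkpoint region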
[2024-11-20 05:53:29.817032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:09.919 [2024-11-20 05:53:29.817038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:09.919 [2024-11-20 05:53:29.817045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:09.919 [2024-11-20 05:53:29.817054] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:09.919 [2024-11-20 05:53:29.817063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:42:09.919 [2024-11-20 05:53:29.817083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:42:09.919 [2024-11-20 05:53:29.817089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:42:09.919 [2024-11-20 05:53:29.817097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:42:09.919 [2024-11-20 05:53:29.817104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:42:09.919 [2024-11-20 05:53:29.817111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:42:09.919 [2024-11-20 05:53:29.817118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:42:09.919 [2024-11-20 05:53:29.817124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:42:09.919 [2024-11-20 05:53:29.817131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:42:09.919 [2024-11-20 05:53:29.817138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:42:09.919 [2024-11-20 05:53:29.817170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:09.919 [2024-11-20 05:53:29.817179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:42:09.919 [2024-11-20 05:53:29.817195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:09.919 [2024-11-20 05:53:29.817203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:09.919 [2024-11-20 05:53:29.817210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:09.919 [2024-11-20 05:53:29.817218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:09.919 [2024-11-20 05:53:29.817226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:09.919 [2024-11-20 05:53:29.817238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:42:09.919 [2024-11-20 05:53:29.817245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.179 [2024-11-20 05:53:29.863042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.179 [2024-11-20 05:53:29.863193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:10.179 [2024-11-20 05:53:29.863225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.824 ms 00:42:10.179 [2024-11-20 05:53:29.863246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.179 [2024-11-20 05:53:29.863467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.179 [2024-11-20 05:53:29.863506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:10.179 [2024-11-20 05:53:29.863564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:42:10.179 [2024-11-20 05:53:29.863585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.179 [2024-11-20 05:53:29.924761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.179 [2024-11-20 05:53:29.924895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:10.179 [2024-11-20 05:53:29.924935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.240 ms 00:42:10.179 [2024-11-20 05:53:29.924958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.179 [2024-11-20 05:53:29.925092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.179 [2024-11-20 05:53:29.925139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:10.179 [2024-11-20 05:53:29.925169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:42:10.179 [2024-11-20 05:53:29.925198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.179 [2024-11-20 05:53:29.926013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.179 [2024-11-20 05:53:29.926060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:10.180 [2024-11-20 05:53:29.926093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:42:10.180 [2024-11-20 05:53:29.926128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:29.926274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:29.926310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:10.180 [2024-11-20 05:53:29.926339] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:42:10.180 [2024-11-20 05:53:29.926367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:29.949303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:29.949403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:10.180 [2024-11-20 05:53:29.949432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.942 ms 00:42:10.180 [2024-11-20 05:53:29.949452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:29.968736] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:42:10.180 [2024-11-20 05:53:29.968872] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:10.180 [2024-11-20 05:53:29.968915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:29.968941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:10.180 [2024-11-20 05:53:29.968963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.326 ms 00:42:10.180 [2024-11-20 05:53:29.969000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:29.997705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:29.997799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:10.180 [2024-11-20 05:53:29.997843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.650 ms 00:42:10.180 [2024-11-20 05:53:29.997880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:30.015442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:30.015532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:10.180 [2024-11-20 05:53:30.015560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.490 ms 00:42:10.180 [2024-11-20 05:53:30.015579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:30.032598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:30.032670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:10.180 [2024-11-20 05:53:30.032714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.954 ms 00:42:10.180 [2024-11-20 05:53:30.032734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.180 [2024-11-20 05:53:30.033567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.180 [2024-11-20 05:53:30.033636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:10.180 [2024-11-20 05:53:30.033670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:42:10.180 [2024-11-20 05:53:30.033691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.130551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.130747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:10.440 [2024-11-20 05:53:30.130799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.995 ms 00:42:10.440 [2024-11-20 05:53:30.130828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.142014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:42:10.440 [2024-11-20 05:53:30.167773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.167941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:10.440 [2024-11-20 05:53:30.167991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.824 ms 00:42:10.440 [2024-11-20 05:53:30.168020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.168199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.168231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:10.440 [2024-11-20 05:53:30.168253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:10.440 [2024-11-20 05:53:30.168286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.168401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.168449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:10.440 [2024-11-20 05:53:30.168479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:42:10.440 [2024-11-20 05:53:30.168501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.168571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.168607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:10.440 [2024-11-20 05:53:30.168636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:42:10.440 [2024-11-20 05:53:30.168664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.168747] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:10.440 [2024-11-20 05:53:30.168788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.168831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:10.440 [2024-11-20 05:53:30.168863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:42:10.440 [2024-11-20 05:53:30.168893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.204076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.204202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:10.440 [2024-11-20 05:53:30.204219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.186 ms 00:42:10.440 [2024-11-20 05:53:30.204228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.440 [2024-11-20 05:53:30.204360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.440 [2024-11-20 05:53:30.204372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:10.440 [2024-11-20 05:53:30.204381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:42:10.440 [2024-11-20 05:53:30.204389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
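Each FTL management step in the trace above is emitted by mngt/ftl_mngt.c as a fixed group of records: Action, name, duration, status. A minimal sketch for totalling the time spent per step across a log like this one; the only assumptions are the record layout visible above and a placeholder log path passed on the command line:

import re
import sys
from collections import defaultdict

# One step is logged as a 428:trace_step "name: ..." record followed by a
# 430:trace_step "duration: ... ms" record (format as in the dump above);
# the name runs up to the next wallclock timestamp (hh:mm:ss...).
STEP_RE = re.compile(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?) \d{2}:\d{2}:\d{2}"
    r".*?430:trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms",
    re.DOTALL,
)

def step_totals(text):
    """Sum the reported milliseconds for every step name in the log."""
    totals = defaultdict(float)
    for name, ms in STEP_RE.findall(text):
        totals[name] += float(ms)
    return totals

if __name__ == "__main__":
    with open(sys.argv[1]) as f:   # path to a captured log (placeholder)
        totals = step_totals(f.read())
    for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {name}")

For the startup traced above, the largest single contributor visible is "Restore P2L checkpoints" at 96.995 ms of the 433.875 ms "FTL startup" total.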
00:42:10.440 [2024-11-20 05:53:30.205778] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:10.440 [2024-11-20 05:53:30.209872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 433.875 ms, result 0 00:42:10.440 [2024-11-20 05:53:30.210741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:10.440 [2024-11-20 05:53:30.228292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:10.701  [2024-11-20T05:53:30.620Z] Copying: 4096/4096 [kB] (average 27 MBps)[2024-11-20 05:53:30.379892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:10.701 [2024-11-20 05:53:30.393987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.394022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:10.701 [2024-11-20 05:53:30.394049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:10.701 [2024-11-20 05:53:30.394062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.394081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:42:10.701 [2024-11-20 05:53:30.398860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.398888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:10.701 [2024-11-20 05:53:30.398899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:42:10.701 [2024-11-20 05:53:30.398907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.400701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.400734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:10.701 [2024-11-20 05:53:30.400745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:42:10.701 [2024-11-20 05:53:30.400752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.403919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.403950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:10.701 [2024-11-20 05:53:30.403959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:42:10.701 [2024-11-20 05:53:30.403966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.409225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.409272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:10.701 [2024-11-20 05:53:30.409282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.233 ms 00:42:10.701 [2024-11-20 05:53:30.409289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.441862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.441893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:10.701 [2024-11-20 05:53:30.441903] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 32.587 ms 00:42:10.701 [2024-11-20 05:53:30.441910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.462048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.462086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:10.701 [2024-11-20 05:53:30.462100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.109 ms 00:42:10.701 [2024-11-20 05:53:30.462107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.462225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.462235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:10.701 [2024-11-20 05:53:30.462243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:42:10.701 [2024-11-20 05:53:30.462250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.496256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.496285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:10.701 [2024-11-20 05:53:30.496295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.043 ms 00:42:10.701 [2024-11-20 05:53:30.496317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.529330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.529358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:10.701 [2024-11-20 05:53:30.529367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.032 ms 00:42:10.701 [2024-11-20 05:53:30.529390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.562012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.562042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:10.701 [2024-11-20 05:53:30.562051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.641 ms 00:42:10.701 [2024-11-20 05:53:30.562074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.594236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.701 [2024-11-20 05:53:30.594266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:10.701 [2024-11-20 05:53:30.594275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.153 ms 00:42:10.701 [2024-11-20 05:53:30.594281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.701 [2024-11-20 05:53:30.594323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:10.701 [2024-11-20 05:53:30.594337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:42:10.701 [2024-11-20 05:53:30.594366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:10.701 [2024-11-20 05:53:30.594530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594887] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.594994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:10.702 [2024-11-20 05:53:30.595064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:10.702 [2024-11-20 05:53:30.595072] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:42:10.702 [2024-11-20 05:53:30.595080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:10.702 [2024-11-20 05:53:30.595087] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:42:10.702 [2024-11-20 05:53:30.595094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:10.702 [2024-11-20 05:53:30.595102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:10.702 [2024-11-20 05:53:30.595108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:10.702 [2024-11-20 05:53:30.595115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:10.702 [2024-11-20 05:53:30.595122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:10.702 [2024-11-20 05:53:30.595128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:10.702 [2024-11-20 05:53:30.595134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:10.702 [2024-11-20 05:53:30.595141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.702 [2024-11-20 05:53:30.595153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:10.702 [2024-11-20 05:53:30.595161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:42:10.702 [2024-11-20 05:53:30.595167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.702 [2024-11-20 05:53:30.615552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.702 [2024-11-20 05:53:30.615579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:10.702 [2024-11-20 05:53:30.615588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.405 ms 00:42:10.702 [2024-11-20 05:53:30.615596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.702 [2024-11-20 05:53:30.616209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:10.702 [2024-11-20 05:53:30.616227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:10.702 [2024-11-20 05:53:30.616235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:42:10.703 [2024-11-20 05:53:30.616242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.962 [2024-11-20 05:53:30.670873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:10.962 [2024-11-20 05:53:30.670907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:10.962 [2024-11-20 05:53:30.670933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:10.962 [2024-11-20 05:53:30.670941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.962 [2024-11-20 05:53:30.671036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:10.962 [2024-11-20 05:53:30.671046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:10.962 [2024-11-20 05:53:30.671062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:10.962 [2024-11-20 05:53:30.671069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.962 [2024-11-20 05:53:30.671144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:10.962 [2024-11-20 05:53:30.671155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:10.962 [2024-11-20 05:53:30.671163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:10.962 [2024-11-20 05:53:30.671170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.962 [2024-11-20 05:53:30.671188] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:10.962 [2024-11-20 05:53:30.671201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:10.962 [2024-11-20 05:53:30.671209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:10.962 [2024-11-20 05:53:30.671216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:10.962 [2024-11-20 05:53:30.797298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:10.962 [2024-11-20 05:53:30.797371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:10.962 [2024-11-20 05:53:30.797384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:10.962 [2024-11-20 05:53:30.797392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:11.222 [2024-11-20 05:53:30.895445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:11.222 [2024-11-20 05:53:30.895553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:11.222 [2024-11-20 05:53:30.895613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:11.222 [2024-11-20 05:53:30.895744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:11.222 [2024-11-20 05:53:30.895828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:11.222 [2024-11-20 05:53:30.895912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895920] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.895969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:11.222 [2024-11-20 05:53:30.895979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:11.222 [2024-11-20 05:53:30.895990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:11.222 [2024-11-20 05:53:30.895997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:11.222 [2024-11-20 05:53:30.896152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.115 ms, result 0 00:42:12.160 00:42:12.160 00:42:12.160 05:53:32 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:42:12.160 05:53:32 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77169 00:42:12.160 05:53:32 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77169 00:42:12.160 05:53:32 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 77169 ']' 00:42:12.160 05:53:32 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:12.160 05:53:32 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:42:12.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:12.160 05:53:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:12.160 05:53:32 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:42:12.160 05:53:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:42:12.419 [2024-11-20 05:53:32.120573] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
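The "FTL shutdown" path above ends with two dumps from ftl_debug.c: per-band validity (all 100 bands free, 0 of 261120 blocks valid) and device statistics, where WAF is the write amplification factor, apparently total writes over user writes, consistent with the inf reported here for 960 total writes and 0 user writes. A hedged sketch of both readings, assuming only the dump format shown:

import math
import re

# Band record format as printed by ftl_dev_dump_bands above:
#   Band <n>: <valid> / <size> wr_cnt: <n> state: <state>
BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def band_states(text):
    """Count bands per state, e.g. {'free': 100} for the dump above."""
    counts = {}
    for _, _, _, _, state in BAND_RE.findall(text):
        counts[state] = counts.get(state, 0) + 1
    return counts

def waf(total_writes, user_writes):
    """Write amplification as ftl_dev_dump_stats appears to report it."""
    return math.inf if user_writes == 0 else total_writes / user_writes

print(waf(960, 0))  # inf, matching "WAF: inf" in the dump above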
00:42:12.419 [2024-11-20 05:53:32.120705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77169 ] 00:42:12.419 [2024-11-20 05:53:32.300451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:12.678 [2024-11-20 05:53:32.437505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:13.616 05:53:33 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:42:13.616 05:53:33 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:42:13.617 05:53:33 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:42:13.875 [2024-11-20 05:53:33.662078] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:13.875 [2024-11-20 05:53:33.662163] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:14.135 [2024-11-20 05:53:33.839950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.840008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:14.135 [2024-11-20 05:53:33.840028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:14.135 [2024-11-20 05:53:33.840037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.843700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.843743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:14.135 [2024-11-20 05:53:33.843755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:42:14.135 [2024-11-20 05:53:33.843762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.843867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:14.135 [2024-11-20 05:53:33.844745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:14.135 [2024-11-20 05:53:33.844779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.844787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:14.135 [2024-11-20 05:53:33.844797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:42:14.135 [2024-11-20 05:53:33.844816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.847250] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:14.135 [2024-11-20 05:53:33.868065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.868110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:14.135 [2024-11-20 05:53:33.868122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.863 ms 00:42:14.135 [2024-11-20 05:53:33.868133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.868226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.868241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:14.135 [2024-11-20 05:53:33.868249] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:42:14.135 [2024-11-20 05:53:33.868259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.880426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.880473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:14.135 [2024-11-20 05:53:33.880484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.138 ms 00:42:14.135 [2024-11-20 05:53:33.880494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.880630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.880646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:14.135 [2024-11-20 05:53:33.880656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:42:14.135 [2024-11-20 05:53:33.880667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.880701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.880712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:14.135 [2024-11-20 05:53:33.880719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:14.135 [2024-11-20 05:53:33.880729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.880754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:42:14.135 [2024-11-20 05:53:33.886263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.886297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:14.135 [2024-11-20 05:53:33.886309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.525 ms 00:42:14.135 [2024-11-20 05:53:33.886317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.886373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.135 [2024-11-20 05:53:33.886383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:14.135 [2024-11-20 05:53:33.886394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:14.135 [2024-11-20 05:53:33.886404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.135 [2024-11-20 05:53:33.886428] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:14.135 [2024-11-20 05:53:33.886458] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:14.135 [2024-11-20 05:53:33.886505] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:14.135 [2024-11-20 05:53:33.886524] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:14.135 [2024-11-20 05:53:33.886619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:14.135 [2024-11-20 05:53:33.886645] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:14.135 [2024-11-20 05:53:33.886662] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:14.135 [2024-11-20 05:53:33.886673] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:14.135 [2024-11-20 05:53:33.886684] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:14.135 [2024-11-20 05:53:33.886692] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:42:14.135 [2024-11-20 05:53:33.886702] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:14.136 [2024-11-20 05:53:33.886708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:14.136 [2024-11-20 05:53:33.886721] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:14.136 [2024-11-20 05:53:33.886729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.136 [2024-11-20 05:53:33.886739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:14.136 [2024-11-20 05:53:33.886747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:42:14.136 [2024-11-20 05:53:33.886756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.136 [2024-11-20 05:53:33.886845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.136 [2024-11-20 05:53:33.886857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:14.136 [2024-11-20 05:53:33.886865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:42:14.136 [2024-11-20 05:53:33.886875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.136 [2024-11-20 05:53:33.886970] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:14.136 [2024-11-20 05:53:33.886988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:14.136 [2024-11-20 05:53:33.886996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:14.136 [2024-11-20 05:53:33.887023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:14.136 [2024-11-20 05:53:33.887051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:14.136 [2024-11-20 05:53:33.887067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:14.136 [2024-11-20 05:53:33.887076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:42:14.136 [2024-11-20 05:53:33.887082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:14.136 [2024-11-20 05:53:33.887091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:14.136 [2024-11-20 05:53:33.887098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:42:14.136 [2024-11-20 05:53:33.887106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:14.136 
[2024-11-20 05:53:33.887113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:14.136 [2024-11-20 05:53:33.887130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:14.136 [2024-11-20 05:53:33.887169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:14.136 [2024-11-20 05:53:33.887203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:14.136 [2024-11-20 05:53:33.887227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:14.136 [2024-11-20 05:53:33.887257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:14.136 [2024-11-20 05:53:33.887282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:14.136 [2024-11-20 05:53:33.887301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:14.136 [2024-11-20 05:53:33.887312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:42:14.136 [2024-11-20 05:53:33.887319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:14.136 [2024-11-20 05:53:33.887330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:14.136 [2024-11-20 05:53:33.887337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:42:14.136 [2024-11-20 05:53:33.887353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:14.136 [2024-11-20 05:53:33.887370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:42:14.136 [2024-11-20 05:53:33.887377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887387] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:14.136 [2024-11-20 05:53:33.887400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:14.136 [2024-11-20 05:53:33.887412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:14.136 [2024-11-20 05:53:33.887433] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:42:14.136 [2024-11-20 05:53:33.887440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:14.136 [2024-11-20 05:53:33.887451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:14.136 [2024-11-20 05:53:33.887458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:14.136 [2024-11-20 05:53:33.887469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:14.136 [2024-11-20 05:53:33.887476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:14.136 [2024-11-20 05:53:33.887489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:14.136 [2024-11-20 05:53:33.887500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:42:14.136 [2024-11-20 05:53:33.887527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:42:14.136 [2024-11-20 05:53:33.887538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:42:14.136 [2024-11-20 05:53:33.887546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:42:14.136 [2024-11-20 05:53:33.887558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:42:14.136 [2024-11-20 05:53:33.887565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:42:14.136 [2024-11-20 05:53:33.887576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:42:14.136 [2024-11-20 05:53:33.887583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:42:14.136 [2024-11-20 05:53:33.887595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:42:14.136 [2024-11-20 05:53:33.887602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:42:14.136 [2024-11-20 05:53:33.887649] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:14.136 [2024-11-20 
05:53:33.887657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:14.136 [2024-11-20 05:53:33.887682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:14.136 [2024-11-20 05:53:33.887693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:14.136 [2024-11-20 05:53:33.887701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:14.136 [2024-11-20 05:53:33.887713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.136 [2024-11-20 05:53:33.887721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:14.136 [2024-11-20 05:53:33.887732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:42:14.136 [2024-11-20 05:53:33.887739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.136 [2024-11-20 05:53:33.938650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.136 [2024-11-20 05:53:33.938700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:14.136 [2024-11-20 05:53:33.938720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.925 ms 00:42:14.137 [2024-11-20 05:53:33.938734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.137 [2024-11-20 05:53:33.938909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.137 [2024-11-20 05:53:33.938921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:14.137 [2024-11-20 05:53:33.938935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:42:14.137 [2024-11-20 05:53:33.938943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.137 [2024-11-20 05:53:33.993704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.137 [2024-11-20 05:53:33.993758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:14.137 [2024-11-20 05:53:33.993775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.833 ms 00:42:14.137 [2024-11-20 05:53:33.993783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.137 [2024-11-20 05:53:33.993885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.137 [2024-11-20 05:53:33.993899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:14.137 [2024-11-20 05:53:33.993912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:14.137 [2024-11-20 05:53:33.993920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.137 [2024-11-20 05:53:33.994680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.137 [2024-11-20 05:53:33.994698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:14.137 [2024-11-20 05:53:33.994718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:42:14.137 [2024-11-20 05:53:33.994726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:42:14.137 [2024-11-20 05:53:33.994868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.137 [2024-11-20 05:53:33.994887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:14.137 [2024-11-20 05:53:33.994900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:42:14.137 [2024-11-20 05:53:33.994908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.137 [2024-11-20 05:53:34.022075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.137 [2024-11-20 05:53:34.022117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:14.137 [2024-11-20 05:53:34.022134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.186 ms 00:42:14.137 [2024-11-20 05:53:34.022142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.054519] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:14.397 [2024-11-20 05:53:34.054560] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:14.397 [2024-11-20 05:53:34.054578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.054587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:14.397 [2024-11-20 05:53:34.054601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.363 ms 00:42:14.397 [2024-11-20 05:53:34.054609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.083133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.083171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:14.397 [2024-11-20 05:53:34.083187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.492 ms 00:42:14.397 [2024-11-20 05:53:34.083195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.100186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.100222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:14.397 [2024-11-20 05:53:34.100241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.920 ms 00:42:14.397 [2024-11-20 05:53:34.100248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.116984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.117018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:14.397 [2024-11-20 05:53:34.117032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.676 ms 00:42:14.397 [2024-11-20 05:53:34.117039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.117826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.117856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:14.397 [2024-11-20 05:53:34.117871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:42:14.397 [2024-11-20 05:53:34.117878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 
05:53:34.210115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.210201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:14.397 [2024-11-20 05:53:34.210239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.376 ms 00:42:14.397 [2024-11-20 05:53:34.210248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.220956] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:42:14.397 [2024-11-20 05:53:34.246307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.246427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:14.397 [2024-11-20 05:53:34.246449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.022 ms 00:42:14.397 [2024-11-20 05:53:34.246461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.246637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.246655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:14.397 [2024-11-20 05:53:34.246664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:14.397 [2024-11-20 05:53:34.246676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.246744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.246762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:14.397 [2024-11-20 05:53:34.246770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:42:14.397 [2024-11-20 05:53:34.246789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.246852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.246867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:14.397 [2024-11-20 05:53:34.246876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:14.397 [2024-11-20 05:53:34.246892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.246933] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:14.397 [2024-11-20 05:53:34.246953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.246961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:14.397 [2024-11-20 05:53:34.246981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:14.397 [2024-11-20 05:53:34.246988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.282551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.282593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:14.397 [2024-11-20 05:53:34.282626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.593 ms 00:42:14.397 [2024-11-20 05:53:34.282635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.282753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.397 [2024-11-20 05:53:34.282764] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:14.397 [2024-11-20 05:53:34.282777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:42:14.397 [2024-11-20 05:53:34.282790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.397 [2024-11-20 05:53:34.284187] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:14.397 [2024-11-20 05:53:34.288138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 444.685 ms, result 0 00:42:14.397 [2024-11-20 05:53:34.289867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:14.397 Some configs were skipped because the RPC state that can call them passed over. 00:42:14.657 05:53:34 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:42:14.657 [2024-11-20 05:53:34.532745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.657 [2024-11-20 05:53:34.532831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:42:14.657 [2024-11-20 05:53:34.532849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.721 ms 00:42:14.657 [2024-11-20 05:53:34.532862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.657 [2024-11-20 05:53:34.532903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.890 ms, result 0 00:42:14.657 true 00:42:14.657 05:53:34 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:42:14.917 [2024-11-20 05:53:34.708298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:14.917 [2024-11-20 05:53:34.708363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:42:14.917 [2024-11-20 05:53:34.708382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:42:14.917 [2024-11-20 05:53:34.708392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:14.917 [2024-11-20 05:53:34.708437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.473 ms, result 0 00:42:14.917 true 00:42:14.917 05:53:34 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77169 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 77169 ']' 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 77169 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77169 00:42:14.917 killing process with pid 77169 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77169' 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 77169 00:42:14.917 05:53:34 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 77169 00:42:16.299 [2024-11-20 05:53:35.906294] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.299 [2024-11-20 05:53:35.906377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:16.299 [2024-11-20 05:53:35.906392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:16.299 [2024-11-20 05:53:35.906401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.299 [2024-11-20 05:53:35.906426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:42:16.299 [2024-11-20 05:53:35.911030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.299 [2024-11-20 05:53:35.911063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:16.299 [2024-11-20 05:53:35.911093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:42:16.299 [2024-11-20 05:53:35.911100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.299 [2024-11-20 05:53:35.911362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.299 [2024-11-20 05:53:35.911380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:16.300 [2024-11-20 05:53:35.911390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:42:16.300 [2024-11-20 05:53:35.911413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.914785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.914828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:16.300 [2024-11-20 05:53:35.914843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:42:16.300 [2024-11-20 05:53:35.914851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.920421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.920461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:16.300 [2024-11-20 05:53:35.920489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.539 ms 00:42:16.300 [2024-11-20 05:53:35.920496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.934515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.934551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:16.300 [2024-11-20 05:53:35.934581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:42:16.300 [2024-11-20 05:53:35.934600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.945361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.945402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:16.300 [2024-11-20 05:53:35.945414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.724 ms 00:42:16.300 [2024-11-20 05:53:35.945422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.945588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.945599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:16.300 [2024-11-20 05:53:35.945610] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:42:16.300 [2024-11-20 05:53:35.945618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.960294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.960328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:16.300 [2024-11-20 05:53:35.960340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.682 ms 00:42:16.300 [2024-11-20 05:53:35.960346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.974698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.974744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:16.300 [2024-11-20 05:53:35.974780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.310 ms 00:42:16.300 [2024-11-20 05:53:35.974787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:35.987976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:35.988007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:16.300 [2024-11-20 05:53:35.988024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.156 ms 00:42:16.300 [2024-11-20 05:53:35.988031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:36.001785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.300 [2024-11-20 05:53:36.001829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:16.300 [2024-11-20 05:53:36.001845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.684 ms 00:42:16.300 [2024-11-20 05:53:36.001868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.300 [2024-11-20 05:53:36.001935] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:16.300 [2024-11-20 05:53:36.001950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.001966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.001974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.001986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.001993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 
05:53:36.002057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:42:16.300 [2024-11-20 05:53:36.002307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:16.300 [2024-11-20 05:53:36.002396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:16.301 [2024-11-20 05:53:36.002975] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:16.301 [2024-11-20 05:53:36.002997] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:42:16.301 [2024-11-20 05:53:36.003019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:16.301 [2024-11-20 05:53:36.003038] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:16.301 [2024-11-20 05:53:36.003045] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:16.301 [2024-11-20 05:53:36.003057] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:16.301 [2024-11-20 05:53:36.003064] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:16.301 [2024-11-20 05:53:36.003076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:16.301 [2024-11-20 05:53:36.003083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:16.301 [2024-11-20 05:53:36.003094] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:16.301 [2024-11-20 05:53:36.003101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:16.301 [2024-11-20 05:53:36.003112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:42:16.301 [2024-11-20 05:53:36.003121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:16.301 [2024-11-20 05:53:36.003134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:42:16.301 [2024-11-20 05:53:36.003141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.301 [2024-11-20 05:53:36.022896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.301 [2024-11-20 05:53:36.022928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:16.301 [2024-11-20 05:53:36.022963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.756 ms 00:42:16.301 [2024-11-20 05:53:36.022971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.301 [2024-11-20 05:53:36.023616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.301 [2024-11-20 05:53:36.023636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:16.301 [2024-11-20 05:53:36.023649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:42:16.301 [2024-11-20 05:53:36.023661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.301 [2024-11-20 05:53:36.093323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.301 [2024-11-20 05:53:36.093363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:16.301 [2024-11-20 05:53:36.093394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.301 [2024-11-20 05:53:36.093403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.301 [2024-11-20 05:53:36.093502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.301 [2024-11-20 05:53:36.093529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:16.301 [2024-11-20 05:53:36.093542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.301 [2024-11-20 05:53:36.093556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.301 [2024-11-20 05:53:36.093619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.301 [2024-11-20 05:53:36.093631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:16.301 [2024-11-20 05:53:36.093648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.301 [2024-11-20 05:53:36.093656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.301 [2024-11-20 05:53:36.093680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.302 [2024-11-20 05:53:36.093689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:16.302 [2024-11-20 05:53:36.093701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.302 [2024-11-20 05:53:36.093708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.221761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.221861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:16.562 [2024-11-20 05:53:36.221882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.221891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 
05:53:36.321267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.321360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:16.562 [2024-11-20 05:53:36.321379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.321394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.321519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.321530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:16.562 [2024-11-20 05:53:36.321548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.321556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.321590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.321599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:16.562 [2024-11-20 05:53:36.321611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.321619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.321759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.321779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:16.562 [2024-11-20 05:53:36.321792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.321800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.321899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.321914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:16.562 [2024-11-20 05:53:36.321927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.321935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.321990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.322000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:16.562 [2024-11-20 05:53:36.322016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.322024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.322079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:16.562 [2024-11-20 05:53:36.322089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:16.562 [2024-11-20 05:53:36.322101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:16.562 [2024-11-20 05:53:36.322109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.562 [2024-11-20 05:53:36.322280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.750 ms, result 0 00:42:17.527 05:53:37 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:17.786 [2024-11-20 05:53:37.493833] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:17.786 [2024-11-20 05:53:37.493983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77239 ] 00:42:17.786 [2024-11-20 05:53:37.672094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.046 [2024-11-20 05:53:37.805748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:18.306 [2024-11-20 05:53:38.219609] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:18.306 [2024-11-20 05:53:38.219687] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:18.566 [2024-11-20 05:53:38.381262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.566 [2024-11-20 05:53:38.381323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:18.566 [2024-11-20 05:53:38.381338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:18.566 [2024-11-20 05:53:38.381372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.566 [2024-11-20 05:53:38.384541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.566 [2024-11-20 05:53:38.384581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:18.566 [2024-11-20 05:53:38.384608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:42:18.566 [2024-11-20 05:53:38.384616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.566 [2024-11-20 05:53:38.384701] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:18.566 [2024-11-20 05:53:38.385695] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:18.566 [2024-11-20 05:53:38.385733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.566 [2024-11-20 05:53:38.385742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:18.566 [2024-11-20 05:53:38.385751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:42:18.566 [2024-11-20 05:53:38.385759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.566 [2024-11-20 05:53:38.388269] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:18.566 [2024-11-20 05:53:38.407582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.566 [2024-11-20 05:53:38.407625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:18.566 [2024-11-20 05:53:38.407653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.352 ms 00:42:18.566 [2024-11-20 05:53:38.407662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.566 [2024-11-20 05:53:38.407752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.566 [2024-11-20 05:53:38.407764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:18.566 [2024-11-20 05:53:38.407773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:42:18.566 [2024-11-20 
05:53:38.407781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.566 [2024-11-20 05:53:38.420152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.566 [2024-11-20 05:53:38.420186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:18.566 [2024-11-20 05:53:38.420212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.337 ms 00:42:18.566 [2024-11-20 05:53:38.420220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.420333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.567 [2024-11-20 05:53:38.420346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:18.567 [2024-11-20 05:53:38.420355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:42:18.567 [2024-11-20 05:53:38.420363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.420392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.567 [2024-11-20 05:53:38.420405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:18.567 [2024-11-20 05:53:38.420413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:42:18.567 [2024-11-20 05:53:38.420421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.420444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:42:18.567 [2024-11-20 05:53:38.426182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.567 [2024-11-20 05:53:38.426215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:18.567 [2024-11-20 05:53:38.426224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.756 ms 00:42:18.567 [2024-11-20 05:53:38.426232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.426297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.567 [2024-11-20 05:53:38.426307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:18.567 [2024-11-20 05:53:38.426315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:18.567 [2024-11-20 05:53:38.426322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.426340] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:18.567 [2024-11-20 05:53:38.426365] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:18.567 [2024-11-20 05:53:38.426402] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:18.567 [2024-11-20 05:53:38.426418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:18.567 [2024-11-20 05:53:38.426509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:18.567 [2024-11-20 05:53:38.426523] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:18.567 [2024-11-20 05:53:38.426535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:42:18.567 [2024-11-20 05:53:38.426544] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:18.567 [2024-11-20 05:53:38.426558] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:18.567 [2024-11-20 05:53:38.426567] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:42:18.567 [2024-11-20 05:53:38.426575] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:18.567 [2024-11-20 05:53:38.426583] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:18.567 [2024-11-20 05:53:38.426590] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:18.567 [2024-11-20 05:53:38.426598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.567 [2024-11-20 05:53:38.426606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:18.567 [2024-11-20 05:53:38.426613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:42:18.567 [2024-11-20 05:53:38.426620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.426693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.567 [2024-11-20 05:53:38.426706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:18.567 [2024-11-20 05:53:38.426714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:42:18.567 [2024-11-20 05:53:38.426721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.567 [2024-11-20 05:53:38.426816] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:18.567 [2024-11-20 05:53:38.426829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:18.567 [2024-11-20 05:53:38.426837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:18.567 [2024-11-20 05:53:38.426844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.426853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:18.567 [2024-11-20 05:53:38.426859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.426868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:42:18.567 [2024-11-20 05:53:38.426875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:18.567 [2024-11-20 05:53:38.426883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:42:18.567 [2024-11-20 05:53:38.426889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:18.567 [2024-11-20 05:53:38.426897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:18.567 [2024-11-20 05:53:38.426904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:42:18.567 [2024-11-20 05:53:38.426911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:18.567 [2024-11-20 05:53:38.426932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:18.567 [2024-11-20 05:53:38.426939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:42:18.567 [2024-11-20 05:53:38.426946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.426953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:42:18.567 [2024-11-20 05:53:38.426960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:42:18.567 [2024-11-20 05:53:38.426966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.426973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:18.567 [2024-11-20 05:53:38.426980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:42:18.567 [2024-11-20 05:53:38.426987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:18.567 [2024-11-20 05:53:38.426993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:18.567 [2024-11-20 05:53:38.426999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:18.567 [2024-11-20 05:53:38.427011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:18.567 [2024-11-20 05:53:38.427017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:18.567 [2024-11-20 05:53:38.427030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:18.567 [2024-11-20 05:53:38.427036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:18.567 [2024-11-20 05:53:38.427049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:18.567 [2024-11-20 05:53:38.427055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:18.567 [2024-11-20 05:53:38.427069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:18.567 [2024-11-20 05:53:38.427075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:42:18.567 [2024-11-20 05:53:38.427081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:18.567 [2024-11-20 05:53:38.427087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:18.567 [2024-11-20 05:53:38.427094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:42:18.567 [2024-11-20 05:53:38.427101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:18.567 [2024-11-20 05:53:38.427114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:42:18.567 [2024-11-20 05:53:38.427120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427126] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:18.567 [2024-11-20 05:53:38.427133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:18.567 [2024-11-20 05:53:38.427140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:18.567 [2024-11-20 05:53:38.427151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:18.567 [2024-11-20 05:53:38.427159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:18.567 [2024-11-20 05:53:38.427167] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:42:18.567 [2024-11-20 05:53:38.427173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:42:18.567 [2024-11-20 05:53:38.427180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:42:18.567 [2024-11-20 05:53:38.427186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:42:18.567 [2024-11-20 05:53:38.427193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:42:18.568 [2024-11-20 05:53:38.427202] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:42:18.568 [2024-11-20 05:53:38.427211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:42:18.568 [2024-11-20 05:53:38.427227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:42:18.568 [2024-11-20 05:53:38.427234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:42:18.568 [2024-11-20 05:53:38.427241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:42:18.568 [2024-11-20 05:53:38.427249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:42:18.568 [2024-11-20 05:53:38.427256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:42:18.568 [2024-11-20 05:53:38.427263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:42:18.568 [2024-11-20 05:53:38.427270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:42:18.568 [2024-11-20 05:53:38.427278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:42:18.568 [2024-11-20 05:53:38.427285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:42:18.568 [2024-11-20 05:53:38.427319] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:42:18.568 [2024-11-20 05:53:38.427328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:42:18.568 [2024-11-20 05:53:38.427344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:42:18.568 [2024-11-20 05:53:38.427351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:42:18.568 [2024-11-20 05:53:38.427359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:42:18.568 [2024-11-20 05:53:38.427367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:18.568 [2024-11-20 05:53:38.427375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:42:18.568 [2024-11-20 05:53:38.427386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms
00:42:18.568 [2024-11-20 05:53:38.427394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:18.568 [2024-11-20 05:53:38.475269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:18.568 [2024-11-20 05:53:38.475319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:42:18.568 [2024-11-20 05:53:38.475331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.908 ms
00:42:18.568 [2024-11-20 05:53:38.475355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:18.568 [2024-11-20 05:53:38.475530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:18.568 [2024-11-20 05:53:38.475542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:42:18.568 [2024-11-20 05:53:38.475551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:42:18.568 [2024-11-20 05:53:38.475558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:18.828 [2024-11-20 05:53:38.539932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:18.828 [2024-11-20 05:53:38.539971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:42:18.828 [2024-11-20 05:53:38.539986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.473 ms
00:42:18.828 [2024-11-20 05:53:38.539995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:18.828 [2024-11-20 05:53:38.540090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:18.828 [2024-11-20 05:53:38.540101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:42:18.828 [2024-11-20 05:53:38.540110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:42:18.828 [2024-11-20 05:53:38.540118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:18.828 [2024-11-20 05:53:38.540915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:18.828 [2024-11-20 05:53:38.540935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:42:18.828 [2024-11-20 05:53:38.540945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms
00:42:18.828 [2024-11-20 05:53:38.540960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:18.828 [2024-11-20 05:53:38.541092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:42:18.828 [2024-11-20 05:53:38.541113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:18.828 [2024-11-20 05:53:38.541122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:42:18.828 [2024-11-20 05:53:38.541132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.828 [2024-11-20 05:53:38.564223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.828 [2024-11-20 05:53:38.564261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:18.828 [2024-11-20 05:53:38.564273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.109 ms 00:42:18.828 [2024-11-20 05:53:38.564281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.828 [2024-11-20 05:53:38.584089] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:42:18.828 [2024-11-20 05:53:38.584129] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:18.828 [2024-11-20 05:53:38.584158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.828 [2024-11-20 05:53:38.584166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:18.828 [2024-11-20 05:53:38.584175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.775 ms 00:42:18.828 [2024-11-20 05:53:38.584182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.828 [2024-11-20 05:53:38.612311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.828 [2024-11-20 05:53:38.612366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:18.828 [2024-11-20 05:53:38.612394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.087 ms 00:42:18.828 [2024-11-20 05:53:38.612403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.828 [2024-11-20 05:53:38.629405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.828 [2024-11-20 05:53:38.629443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:18.828 [2024-11-20 05:53:38.629454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.955 ms 00:42:18.828 [2024-11-20 05:53:38.629461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.828 [2024-11-20 05:53:38.646526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.828 [2024-11-20 05:53:38.646563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:18.829 [2024-11-20 05:53:38.646589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.008 ms 00:42:18.829 [2024-11-20 05:53:38.646596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.829 [2024-11-20 05:53:38.647385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.829 [2024-11-20 05:53:38.647417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:18.829 [2024-11-20 05:53:38.647428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:42:18.829 [2024-11-20 05:53:38.647436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:18.829 [2024-11-20 05:53:38.741523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:18.829 [2024-11-20 
05:53:38.741628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:18.829 [2024-11-20 05:53:38.741645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.236 ms 00:42:18.829 [2024-11-20 05:53:38.741655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.088 [2024-11-20 05:53:38.752253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:42:19.088 [2024-11-20 05:53:38.777921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.088 [2024-11-20 05:53:38.778012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:19.088 [2024-11-20 05:53:38.778029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.197 ms 00:42:19.088 [2024-11-20 05:53:38.778047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.088 [2024-11-20 05:53:38.778195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.089 [2024-11-20 05:53:38.778208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:19.089 [2024-11-20 05:53:38.778217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:19.089 [2024-11-20 05:53:38.778225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.089 [2024-11-20 05:53:38.778293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.089 [2024-11-20 05:53:38.778317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:19.089 [2024-11-20 05:53:38.778326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:42:19.089 [2024-11-20 05:53:38.778333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.089 [2024-11-20 05:53:38.778385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.089 [2024-11-20 05:53:38.778405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:19.089 [2024-11-20 05:53:38.778414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:42:19.089 [2024-11-20 05:53:38.778421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.089 [2024-11-20 05:53:38.778465] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:19.089 [2024-11-20 05:53:38.778475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.089 [2024-11-20 05:53:38.778483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:19.089 [2024-11-20 05:53:38.778491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:19.089 [2024-11-20 05:53:38.778498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.089 [2024-11-20 05:53:38.815192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.089 [2024-11-20 05:53:38.815235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:19.089 [2024-11-20 05:53:38.815263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.742 ms 00:42:19.089 [2024-11-20 05:53:38.815272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:19.089 [2024-11-20 05:53:38.815391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:19.089 [2024-11-20 05:53:38.815403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:19.089 [2024-11-20 
05:53:38.815412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:42:19.089 [2024-11-20 05:53:38.815419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:19.089 [2024-11-20 05:53:38.816728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:42:19.089 [2024-11-20 05:53:38.820944] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 435.976 ms, result 0
00:42:19.089 [2024-11-20 05:53:38.821925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:42:19.089 [2024-11-20 05:53:38.839713] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:42:20.055  [2024-11-20T05:53:40.915Z] Copying: 32/256 [MB] (32 MBps)
[2024-11-20T05:53:42.295Z] Copying: 62/256 [MB] (29 MBps)
[2024-11-20T05:53:43.233Z] Copying: 92/256 [MB] (30 MBps)
[2024-11-20T05:53:44.172Z] Copying: 123/256 [MB] (30 MBps)
[2024-11-20T05:53:45.110Z] Copying: 153/256 [MB] (29 MBps)
[2024-11-20T05:53:46.049Z] Copying: 183/256 [MB] (29 MBps)
[2024-11-20T05:53:46.988Z] Copying: 213/256 [MB] (30 MBps)
[2024-11-20T05:53:47.556Z] Copying: 244/256 [MB] (30 MBps)
[2024-11-20T05:53:47.817Z] Copying: 256/256 [MB] (average 30 MBps)
[2024-11-20 05:53:47.677606] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:42:27.898 [2024-11-20 05:53:47.714826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:27.898 [2024-11-20 05:53:47.714895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:42:27.898 [2024-11-20 05:53:47.714922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:42:27.898 [2024-11-20 05:53:47.714953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:27.898 [2024-11-20 05:53:47.715010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:42:27.898 [2024-11-20 05:53:47.720039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:27.898 [2024-11-20 05:53:47.720084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:42:27.898 [2024-11-20 05:53:47.720107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.007 ms
00:42:27.898 [2024-11-20 05:53:47.720119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:27.898 [2024-11-20 05:53:47.720456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:27.898 [2024-11-20 05:53:47.720485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:42:27.898 [2024-11-20 05:53:47.720499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms
00:42:27.898 [2024-11-20 05:53:47.720513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:27.898 [2024-11-20 05:53:47.723514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:27.898 [2024-11-20 05:53:47.723556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:42:27.898 [2024-11-20 05:53:47.723570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms
00:42:27.898 [2024-11-20 05:53:47.723583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:27.898 [2024-11-20 05:53:47.729383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:42:27.898 [2024-11-20 05:53:47.729419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:27.898 [2024-11-20 05:53:47.729434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.772 ms 00:42:27.898 [2024-11-20 05:53:47.729445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.898 [2024-11-20 05:53:47.764505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.898 [2024-11-20 05:53:47.764547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:27.898 [2024-11-20 05:53:47.764578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.023 ms 00:42:27.898 [2024-11-20 05:53:47.764589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.898 [2024-11-20 05:53:47.784532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.898 [2024-11-20 05:53:47.784577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:27.898 [2024-11-20 05:53:47.784614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.914 ms 00:42:27.898 [2024-11-20 05:53:47.784625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:27.898 [2024-11-20 05:53:47.784839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:27.898 [2024-11-20 05:53:47.784861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:27.898 [2024-11-20 05:53:47.784876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:42:27.898 [2024-11-20 05:53:47.784888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.162 [2024-11-20 05:53:47.819660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.162 [2024-11-20 05:53:47.819698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:28.162 [2024-11-20 05:53:47.819729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.794 ms 00:42:28.162 [2024-11-20 05:53:47.819740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.162 [2024-11-20 05:53:47.855123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.162 [2024-11-20 05:53:47.855162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:28.162 [2024-11-20 05:53:47.855192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.389 ms 00:42:28.162 [2024-11-20 05:53:47.855202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.162 [2024-11-20 05:53:47.888591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.162 [2024-11-20 05:53:47.888628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:28.162 [2024-11-20 05:53:47.888659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.391 ms 00:42:28.162 [2024-11-20 05:53:47.888667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.162 [2024-11-20 05:53:47.922626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.162 [2024-11-20 05:53:47.922664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:28.162 [2024-11-20 05:53:47.922694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.901 ms 00:42:28.162 [2024-11-20 05:53:47.922704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.162 [2024-11-20 
05:53:47.922763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:28.162 [2024-11-20 05:53:47.922787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.922994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 
05:53:47.923092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:28.162 [2024-11-20 05:53:47.923200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:42:28.163 [2024-11-20 05:53:47.923388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.923997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.924009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.924021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:28.163 [2024-11-20 05:53:47.924046] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:28.163 [2024-11-20 05:53:47.924058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2028fbc6-7764-4261-8bfa-c9609e66672d 00:42:28.163 [2024-11-20 05:53:47.924070] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:28.163 [2024-11-20 05:53:47.924081] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:28.163 [2024-11-20 05:53:47.924092] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:28.163 [2024-11-20 05:53:47.924104] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:28.163 [2024-11-20 05:53:47.924116] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:28.163 [2024-11-20 05:53:47.924128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:28.163 [2024-11-20 05:53:47.924140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:28.163 [2024-11-20 05:53:47.924150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:28.163 [2024-11-20 05:53:47.924160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:28.163 [2024-11-20 05:53:47.924172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.163 [2024-11-20 05:53:47.924190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:28.163 [2024-11-20 05:53:47.924203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:42:28.163 [2024-11-20 05:53:47.924215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.163 [2024-11-20 05:53:47.944831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.163 [2024-11-20 05:53:47.944866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:28.163 [2024-11-20 05:53:47.944897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.623 ms 00:42:28.163 [2024-11-20 05:53:47.944908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.163 [2024-11-20 05:53:47.945536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:28.163 [2024-11-20 05:53:47.945563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:28.163 [2024-11-20 05:53:47.945577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:42:28.163 [2024-11-20 05:53:47.945589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.163 [2024-11-20 05:53:48.001492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.163 [2024-11-20 05:53:48.001533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:28.163 [2024-11-20 05:53:48.001548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.163 [2024-11-20 05:53:48.001559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.163 [2024-11-20 05:53:48.001701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.164 [2024-11-20 05:53:48.001717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:28.164 [2024-11-20 05:53:48.001731] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.164 [2024-11-20 05:53:48.001742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.164 [2024-11-20 05:53:48.001843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.164 [2024-11-20 05:53:48.001862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:28.164 [2024-11-20 05:53:48.001875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.164 [2024-11-20 05:53:48.001887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.164 [2024-11-20 05:53:48.001916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.164 [2024-11-20 05:53:48.001934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:28.164 [2024-11-20 05:53:48.001947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.164 [2024-11-20 05:53:48.001959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.424 [2024-11-20 05:53:48.129858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.424 [2024-11-20 05:53:48.129938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:28.424 [2024-11-20 05:53:48.129957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.424 [2024-11-20 05:53:48.129969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.424 [2024-11-20 05:53:48.232034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.424 [2024-11-20 05:53:48.232111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:28.424 [2024-11-20 05:53:48.232129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.424 [2024-11-20 05:53:48.232141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.424 [2024-11-20 05:53:48.232274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.424 [2024-11-20 05:53:48.232289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:28.424 [2024-11-20 05:53:48.232300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.424 [2024-11-20 05:53:48.232309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.424 [2024-11-20 05:53:48.232349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.424 [2024-11-20 05:53:48.232362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:28.424 [2024-11-20 05:53:48.232380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.424 [2024-11-20 05:53:48.232389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.424 [2024-11-20 05:53:48.232554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.424 [2024-11-20 05:53:48.232580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:28.424 [2024-11-20 05:53:48.232594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:28.424 [2024-11-20 05:53:48.232606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:28.424 [2024-11-20 05:53:48.232667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:28.424 [2024-11-20 05:53:48.232684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock
00:42:28.424 [2024-11-20 05:53:48.232697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:42:28.424 [2024-11-20 05:53:48.232715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:28.424 [2024-11-20 05:53:48.232776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:42:28.424 [2024-11-20 05:53:48.232791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:42:28.424 [2024-11-20 05:53:48.232828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:42:28.424 [2024-11-20 05:53:48.232842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:28.424 [2024-11-20 05:53:48.232920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:42:28.424 [2024-11-20 05:53:48.232939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:42:28.424 [2024-11-20 05:53:48.232957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:42:28.424 [2024-11-20 05:53:48.232969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:28.424 [2024-11-20 05:53:48.233181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 519.364 ms, result 0
00:42:29.804
00:42:29.804
00:42:29.804 05:53:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:42:30.064 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:42:30.064 05:53:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77169
00:42:30.064 05:53:49 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 77169 ']'
00:42:30.064 05:53:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 77169
00:42:30.064 Process with pid 77169 is not found
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77169) - No such process
00:42:30.064 05:53:49 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 77169 is not found'
************************************
00:42:30.064 END TEST ftl_trim
************************************
00:42:30.064
00:42:30.064 real 1m9.830s
00:42:30.064 user 1m40.368s
00:42:30.064 sys 0m7.693s
00:42:30.064 05:53:49 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable
00:42:30.064 05:53:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:42:30.064 05:53:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
05:53:49 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:42:30.064 05:53:49 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:42:30.064 05:53:49 ftl -- common/autotest_common.sh@10 -- # set +x
00:42:30.064 ************************************
00:42:30.064 START TEST ftl_restore
************************************
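The trim teardown above verifies data integrity with md5sum -c before removing its scratch files, and killprocess only signals the target after probing it with kill -0. A minimal sketch of the same generate-then-verify-then-kill pattern (the paths and pid variable here are illustrative, not the test's real helpers):

  # record a checksum of the test data, verify it after the restore cycle
  md5sum /tmp/ftl_data > /tmp/ftl_data.md5        # hypothetical path
  # ... restart/restore the device under test and read the data back ...
  md5sum -c /tmp/ftl_data.md5                     # prints "<file>: OK" on a match
  # signal the target only if the pid is still alive, as killprocess does
  pid=77169
  if kill -0 "$pid" 2>/dev/null; then
      kill "$pid"
  else
      echo "Process with pid $pid is not found"
  fi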
00:42:30.064 05:53:49 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:42:30.324 * Looking for test storage...
00:42:30.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:42:30.324 05:53:50 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:42:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:30.324 --rc genhtml_branch_coverage=1
00:42:30.324 --rc genhtml_function_coverage=1
00:42:30.324 --rc genhtml_legend=1
00:42:30.324 --rc geninfo_all_blocks=1
00:42:30.324 --rc geninfo_unexecuted_blocks=1
00:42:30.324
00:42:30.324 '
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:42:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:30.324 --rc genhtml_branch_coverage=1
00:42:30.324 --rc genhtml_function_coverage=1
00:42:30.324 --rc genhtml_legend=1
00:42:30.324 --rc geninfo_all_blocks=1
00:42:30.324 --rc geninfo_unexecuted_blocks=1
00:42:30.324
00:42:30.324 '
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:42:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:30.324 --rc genhtml_branch_coverage=1
00:42:30.324 --rc genhtml_function_coverage=1
00:42:30.324 --rc genhtml_legend=1
00:42:30.324 --rc geninfo_all_blocks=1
00:42:30.324 --rc geninfo_unexecuted_blocks=1
00:42:30.324
00:42:30.324 '
00:42:30.324 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:42:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:42:30.324 --rc genhtml_branch_coverage=1
00:42:30.324 --rc genhtml_function_coverage=1
00:42:30.324 --rc genhtml_legend=1
00:42:30.324 --rc geninfo_all_blocks=1
00:42:30.324 --rc geninfo_unexecuted_blocks=1
00:42:30.324
00:42:30.324 '
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
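The cmp_versions trace above splits both version strings on '.', '-' and ':' (via IFS) and compares the resulting fields numerically, which is how "1.15" sorts below "2". A condensed sketch of that comparison (simplified; the real scripts/common.sh helper also tracks the gt/eq cases and non-numeric fields):

  lt() {  # succeed if version $1 is strictly older than version $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                        # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2"   # matches the trace above: 1 < 2, so lt returns 0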
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.2sswjkjmu3
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
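restore.sh arms its cleanup handler before touching any state, so an interrupted run still tears down the scratch directory and the target. A minimal sketch of that trap pattern (restore_kill's real body is not shown in this log; the cleanup below is illustrative):

  mount_dir=$(mktemp -d)
  restore_kill() {
      rm -rf "$mount_dir"                        # illustrative cleanup only
      [[ -n "$svcpid" ]] && kill "$svcpid" 2>/dev/null
  }
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
  # ... test body runs here ...
  trap - SIGINT SIGTERM EXIT                     # cleared on the success path, as trim.sh@108 does above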
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77440
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77440
00:42:30.325 05:53:50 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:42:30.325 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 77440 ']'
00:42:30.325 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:30.325 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100
00:42:30.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:30.325 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:42:30.325 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable
00:42:30.325 05:53:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:42:30.584 [2024-11-20 05:53:50.337444] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization...
00:42:30.584 [2024-11-20 05:53:50.337616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77440 ]
00:42:30.844 [2024-11-20 05:53:50.517952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:30.844 [2024-11-20 05:53:50.648505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:42:31.782 05:53:51 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:42:31.782 05:53:51 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0
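waitforlisten launches spdk_tgt in the background and blocks until the target answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock), retrying up to max_retries=100. A reduced sketch of that launch-and-poll loop (the retry count and sleep interval here are illustrative):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  svcpid=$!
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      # done once the socket exists and the target answers a trivial RPC
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done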
"4ff487cc-a04f-4484-8617-bd09721b9224", 00:42:32.301 "numa_id": -1, 00:42:32.301 "assigned_rate_limits": { 00:42:32.301 "rw_ios_per_sec": 0, 00:42:32.301 "rw_mbytes_per_sec": 0, 00:42:32.301 "r_mbytes_per_sec": 0, 00:42:32.301 "w_mbytes_per_sec": 0 00:42:32.301 }, 00:42:32.301 "claimed": true, 00:42:32.301 "claim_type": "read_many_write_one", 00:42:32.301 "zoned": false, 00:42:32.301 "supported_io_types": { 00:42:32.301 "read": true, 00:42:32.301 "write": true, 00:42:32.301 "unmap": true, 00:42:32.301 "flush": true, 00:42:32.301 "reset": true, 00:42:32.301 "nvme_admin": true, 00:42:32.301 "nvme_io": true, 00:42:32.301 "nvme_io_md": false, 00:42:32.301 "write_zeroes": true, 00:42:32.301 "zcopy": false, 00:42:32.301 "get_zone_info": false, 00:42:32.301 "zone_management": false, 00:42:32.301 "zone_append": false, 00:42:32.301 "compare": true, 00:42:32.301 "compare_and_write": false, 00:42:32.301 "abort": true, 00:42:32.301 "seek_hole": false, 00:42:32.301 "seek_data": false, 00:42:32.301 "copy": true, 00:42:32.301 "nvme_iov_md": false 00:42:32.301 }, 00:42:32.301 "driver_specific": { 00:42:32.301 "nvme": [ 00:42:32.301 { 00:42:32.301 "pci_address": "0000:00:11.0", 00:42:32.301 "trid": { 00:42:32.301 "trtype": "PCIe", 00:42:32.301 "traddr": "0000:00:11.0" 00:42:32.301 }, 00:42:32.301 "ctrlr_data": { 00:42:32.301 "cntlid": 0, 00:42:32.301 "vendor_id": "0x1b36", 00:42:32.301 "model_number": "QEMU NVMe Ctrl", 00:42:32.301 "serial_number": "12341", 00:42:32.301 "firmware_revision": "8.0.0", 00:42:32.301 "subnqn": "nqn.2019-08.org.qemu:12341", 00:42:32.301 "oacs": { 00:42:32.301 "security": 0, 00:42:32.301 "format": 1, 00:42:32.301 "firmware": 0, 00:42:32.301 "ns_manage": 1 00:42:32.301 }, 00:42:32.301 "multi_ctrlr": false, 00:42:32.301 "ana_reporting": false 00:42:32.301 }, 00:42:32.301 "vs": { 00:42:32.301 "nvme_version": "1.4" 00:42:32.301 }, 00:42:32.301 "ns_data": { 00:42:32.301 "id": 1, 00:42:32.301 "can_share": false 00:42:32.301 } 00:42:32.301 } 00:42:32.301 ], 00:42:32.301 "mp_policy": "active_passive" 00:42:32.301 } 00:42:32.301 } 00:42:32.301 ]' 00:42:32.302 05:53:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:42:32.302 05:53:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:42:32.302 05:53:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:42:32.302 05:53:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:42:32.302 05:53:52 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:42:32.302 05:53:52 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:42:32.302 05:53:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:42:32.302 05:53:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:42:32.302 05:53:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:42:32.302 05:53:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:32.302 05:53:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:32.561 05:53:52 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=775de4cb-4451-47d2-920c-4058f80b07c3 00:42:32.561 05:53:52 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:42:32.561 05:53:52 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 775de4cb-4451-47d2-920c-4058f80b07c3 00:42:32.820 05:53:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:42:33.080 05:53:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=9d3d1836-e74f-4f4c-9057-6fd3465622a1 00:42:33.080 05:53:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9d3d1836-e74f-4f4c-9057-6fd3465622a1 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:42:33.339 05:53:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.339 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.339 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:42:33.339 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:42:33.339 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:42:33.339 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.339 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:42:33.339 { 00:42:33.339 "name": "580bc1f6-39c3-440a-aa7c-b65db27de8b6", 00:42:33.339 "aliases": [ 00:42:33.339 "lvs/nvme0n1p0" 00:42:33.339 ], 00:42:33.339 "product_name": "Logical Volume", 00:42:33.339 "block_size": 4096, 00:42:33.339 "num_blocks": 26476544, 00:42:33.339 "uuid": "580bc1f6-39c3-440a-aa7c-b65db27de8b6", 00:42:33.339 "assigned_rate_limits": { 00:42:33.339 "rw_ios_per_sec": 0, 00:42:33.339 "rw_mbytes_per_sec": 0, 00:42:33.339 "r_mbytes_per_sec": 0, 00:42:33.339 "w_mbytes_per_sec": 0 00:42:33.339 }, 00:42:33.339 "claimed": false, 00:42:33.339 "zoned": false, 00:42:33.339 "supported_io_types": { 00:42:33.339 "read": true, 00:42:33.339 "write": true, 00:42:33.339 "unmap": true, 00:42:33.339 "flush": false, 00:42:33.339 "reset": true, 00:42:33.339 "nvme_admin": false, 00:42:33.339 "nvme_io": false, 00:42:33.339 "nvme_io_md": false, 00:42:33.339 "write_zeroes": true, 00:42:33.339 "zcopy": false, 00:42:33.339 "get_zone_info": false, 00:42:33.339 "zone_management": false, 00:42:33.339 "zone_append": false, 00:42:33.339 "compare": false, 00:42:33.339 "compare_and_write": false, 00:42:33.339 "abort": false, 00:42:33.339 "seek_hole": true, 00:42:33.339 "seek_data": true, 00:42:33.339 "copy": false, 00:42:33.339 "nvme_iov_md": false 00:42:33.339 }, 00:42:33.339 "driver_specific": { 00:42:33.339 "lvol": { 00:42:33.339 "lvol_store_uuid": "9d3d1836-e74f-4f4c-9057-6fd3465622a1", 00:42:33.339 "base_bdev": "nvme0n1", 00:42:33.339 "thin_provision": true, 00:42:33.339 "num_allocated_clusters": 0, 00:42:33.339 "snapshot": false, 00:42:33.339 "clone": false, 00:42:33.339 "esnap_clone": false 00:42:33.339 } 00:42:33.339 } 00:42:33.339 } 00:42:33.339 ]' 00:42:33.339 05:53:53 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:42:33.598 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:42:33.598 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:42:33.598 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:42:33.598 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:42:33.598 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:42:33.598 05:53:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:42:33.598 05:53:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:42:33.598 05:53:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:33.857 05:53:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:33.857 05:53:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:42:33.857 05:53:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.857 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.857 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:42:33.857 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:42:33.857 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:42:33.857 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:33.857 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:42:33.857 { 00:42:33.857 "name": "580bc1f6-39c3-440a-aa7c-b65db27de8b6", 00:42:33.857 "aliases": [ 00:42:33.857 "lvs/nvme0n1p0" 00:42:33.857 ], 00:42:33.857 "product_name": "Logical Volume", 00:42:33.857 "block_size": 4096, 00:42:33.857 "num_blocks": 26476544, 00:42:33.857 "uuid": "580bc1f6-39c3-440a-aa7c-b65db27de8b6", 00:42:33.857 "assigned_rate_limits": { 00:42:33.857 "rw_ios_per_sec": 0, 00:42:33.857 "rw_mbytes_per_sec": 0, 00:42:33.857 "r_mbytes_per_sec": 0, 00:42:33.857 "w_mbytes_per_sec": 0 00:42:33.857 }, 00:42:33.857 "claimed": false, 00:42:33.857 "zoned": false, 00:42:33.857 "supported_io_types": { 00:42:33.857 "read": true, 00:42:33.857 "write": true, 00:42:33.857 "unmap": true, 00:42:33.857 "flush": false, 00:42:33.857 "reset": true, 00:42:33.857 "nvme_admin": false, 00:42:33.857 "nvme_io": false, 00:42:33.857 "nvme_io_md": false, 00:42:33.857 "write_zeroes": true, 00:42:33.857 "zcopy": false, 00:42:33.857 "get_zone_info": false, 00:42:33.857 "zone_management": false, 00:42:33.857 "zone_append": false, 00:42:33.857 "compare": false, 00:42:33.857 "compare_and_write": false, 00:42:33.857 "abort": false, 00:42:33.857 "seek_hole": true, 00:42:33.857 "seek_data": true, 00:42:33.857 "copy": false, 00:42:33.857 "nvme_iov_md": false 00:42:33.857 }, 00:42:33.857 "driver_specific": { 00:42:33.857 "lvol": { 00:42:33.857 "lvol_store_uuid": "9d3d1836-e74f-4f4c-9057-6fd3465622a1", 00:42:33.857 "base_bdev": "nvme0n1", 00:42:33.857 "thin_provision": true, 00:42:33.857 "num_allocated_clusters": 0, 00:42:33.857 "snapshot": false, 00:42:33.857 "clone": false, 00:42:33.857 "esnap_clone": false 00:42:33.857 } 00:42:33.857 } 00:42:33.857 } 00:42:33.857 ]' 00:42:34.116 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
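Note: the bs/nb/bdev_size assignments traced around this point come from the get_bdev_size helper in common/autotest_common.sh. It fetches the bdev's JSON via the bdev_get_bdevs RPC, extracts block_size and num_blocks with jq, and prints the size in MiB. A minimal reconstruction from the xtrace above (assuming rpc.py and jq are on PATH):

    get_bdev_size() {
        local bdev_name=$1
        local bdev_info bs nb bdev_size
        # Same RPC as traced: returns a one-element JSON array for the bdev.
        bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for every bdev in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 (nvme0n1), 26476544 (lvol)
        bdev_size=$((bs * nb / 1024 / 1024))          # 5120 MiB and 103424 MiB above
        echo "$bdev_size"
    }

The base_size=5171 and cache_size=5171 locals traced nearby are consistent with sizing the NV cache at one twentieth of the 103424 MiB base bdev (103424 / 20 = 5171 with integer division); the exact sizing rule lives in test/ftl/common.sh.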
00:42:34.116 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:42:34.116 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:42:34.116 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:42:34.116 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:42:34.116 05:53:53 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:42:34.116 05:53:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:42:34.116 05:53:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:34.414 05:53:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:42:34.414 05:53:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 580bc1f6-39c3-440a-aa7c-b65db27de8b6 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:42:34.414 { 00:42:34.414 "name": "580bc1f6-39c3-440a-aa7c-b65db27de8b6", 00:42:34.414 "aliases": [ 00:42:34.414 "lvs/nvme0n1p0" 00:42:34.414 ], 00:42:34.414 "product_name": "Logical Volume", 00:42:34.414 "block_size": 4096, 00:42:34.414 "num_blocks": 26476544, 00:42:34.414 "uuid": "580bc1f6-39c3-440a-aa7c-b65db27de8b6", 00:42:34.414 "assigned_rate_limits": { 00:42:34.414 "rw_ios_per_sec": 0, 00:42:34.414 "rw_mbytes_per_sec": 0, 00:42:34.414 "r_mbytes_per_sec": 0, 00:42:34.414 "w_mbytes_per_sec": 0 00:42:34.414 }, 00:42:34.414 "claimed": false, 00:42:34.414 "zoned": false, 00:42:34.414 "supported_io_types": { 00:42:34.414 "read": true, 00:42:34.414 "write": true, 00:42:34.414 "unmap": true, 00:42:34.414 "flush": false, 00:42:34.414 "reset": true, 00:42:34.414 "nvme_admin": false, 00:42:34.414 "nvme_io": false, 00:42:34.414 "nvme_io_md": false, 00:42:34.414 "write_zeroes": true, 00:42:34.414 "zcopy": false, 00:42:34.414 "get_zone_info": false, 00:42:34.414 "zone_management": false, 00:42:34.414 "zone_append": false, 00:42:34.414 "compare": false, 00:42:34.414 "compare_and_write": false, 00:42:34.414 "abort": false, 00:42:34.414 "seek_hole": true, 00:42:34.414 "seek_data": true, 00:42:34.414 "copy": false, 00:42:34.414 "nvme_iov_md": false 00:42:34.414 }, 00:42:34.414 "driver_specific": { 00:42:34.414 "lvol": { 00:42:34.414 "lvol_store_uuid": "9d3d1836-e74f-4f4c-9057-6fd3465622a1", 00:42:34.414 "base_bdev": "nvme0n1", 00:42:34.414 "thin_provision": true, 00:42:34.414 "num_allocated_clusters": 0, 00:42:34.414 "snapshot": false, 00:42:34.414 "clone": false, 00:42:34.414 "esnap_clone": false 00:42:34.414 } 00:42:34.414 } 00:42:34.414 } 00:42:34.414 ]' 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:42:34.414 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:42:34.675 05:53:54 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:42:34.675 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:42:34.675 05:53:54 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 580bc1f6-39c3-440a-aa7c-b65db27de8b6 --l2p_dram_limit 10' 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:42:34.675 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:42:34.675 05:53:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 580bc1f6-39c3-440a-aa7c-b65db27de8b6 --l2p_dram_limit 10 -c nvc0n1p0 00:42:34.675 [2024-11-20 05:53:54.564255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.564331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:34.675 [2024-11-20 05:53:54.564349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:34.675 [2024-11-20 05:53:54.564359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.564445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.564456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:34.675 [2024-11-20 05:53:54.564467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:42:34.675 [2024-11-20 05:53:54.564475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.564498] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:34.675 [2024-11-20 05:53:54.565631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:34.675 [2024-11-20 05:53:54.565675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.565684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:34.675 [2024-11-20 05:53:54.565697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:42:34.675 [2024-11-20 05:53:54.565704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.565781] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6682375f-6889-4b06-a1ab-eca8cf79edd1 00:42:34.675 [2024-11-20 05:53:54.568295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.568333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:34.675 [2024-11-20 05:53:54.568344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:42:34.675 [2024-11-20 05:53:54.568354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.582491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 
05:53:54.582540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:34.675 [2024-11-20 05:53:54.582568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.105 ms 00:42:34.675 [2024-11-20 05:53:54.582579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.582691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.582708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:34.675 [2024-11-20 05:53:54.582718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:42:34.675 [2024-11-20 05:53:54.582733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.582796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.582808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:34.675 [2024-11-20 05:53:54.582849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:42:34.675 [2024-11-20 05:53:54.582864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.582890] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:34.675 [2024-11-20 05:53:54.588662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.588697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:34.675 [2024-11-20 05:53:54.588712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.789 ms 00:42:34.675 [2024-11-20 05:53:54.588720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.588760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.675 [2024-11-20 05:53:54.588770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:34.675 [2024-11-20 05:53:54.588781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:34.675 [2024-11-20 05:53:54.588788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.675 [2024-11-20 05:53:54.588833] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:34.675 [2024-11-20 05:53:54.588970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:34.675 [2024-11-20 05:53:54.588994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:34.675 [2024-11-20 05:53:54.589005] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:34.675 [2024-11-20 05:53:54.589018] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:34.675 [2024-11-20 05:53:54.589027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:34.675 [2024-11-20 05:53:54.589038] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:34.675 [2024-11-20 05:53:54.589046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:34.675 [2024-11-20 05:53:54.589059] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:34.676 [2024-11-20 05:53:54.589067] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:34.676 [2024-11-20 05:53:54.589077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.676 [2024-11-20 05:53:54.589085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:34.676 [2024-11-20 05:53:54.589096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:42:34.676 [2024-11-20 05:53:54.589116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.676 [2024-11-20 05:53:54.589196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.676 [2024-11-20 05:53:54.589204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:34.676 [2024-11-20 05:53:54.589215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:42:34.676 [2024-11-20 05:53:54.589222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.676 [2024-11-20 05:53:54.589317] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:34.676 [2024-11-20 05:53:54.589330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:34.676 [2024-11-20 05:53:54.589341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:34.676 [2024-11-20 05:53:54.589367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:34.676 [2024-11-20 05:53:54.589393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:34.676 [2024-11-20 05:53:54.589410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:34.676 [2024-11-20 05:53:54.589417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:34.676 [2024-11-20 05:53:54.589426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:34.676 [2024-11-20 05:53:54.589432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:34.676 [2024-11-20 05:53:54.589441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:34.676 [2024-11-20 05:53:54.589447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:34.676 [2024-11-20 05:53:54.589465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:34.676 [2024-11-20 05:53:54.589500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:34.676 
[2024-11-20 05:53:54.589522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:34.676 [2024-11-20 05:53:54.589546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:34.676 [2024-11-20 05:53:54.589568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:34.676 [2024-11-20 05:53:54.589597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:34.676 [2024-11-20 05:53:54.589612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:34.676 [2024-11-20 05:53:54.589618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:34.676 [2024-11-20 05:53:54.589627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:34.676 [2024-11-20 05:53:54.589634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:34.676 [2024-11-20 05:53:54.589643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:34.676 [2024-11-20 05:53:54.589649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:34.676 [2024-11-20 05:53:54.589665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:34.676 [2024-11-20 05:53:54.589673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589679] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:34.676 [2024-11-20 05:53:54.589701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:34.676 [2024-11-20 05:53:54.589708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:34.676 [2024-11-20 05:53:54.589727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:34.676 [2024-11-20 05:53:54.589739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:34.676 [2024-11-20 05:53:54.589745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:34.676 [2024-11-20 05:53:54.589754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:34.676 [2024-11-20 05:53:54.589760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:34.676 [2024-11-20 05:53:54.589769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:34.676 [2024-11-20 05:53:54.589781] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:34.676 [2024-11-20 
05:53:54.589793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:34.676 [2024-11-20 05:53:54.589821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:34.676 [2024-11-20 05:53:54.589832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:34.676 [2024-11-20 05:53:54.589856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:34.676 [2024-11-20 05:53:54.589865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:34.676 [2024-11-20 05:53:54.589873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:34.676 [2024-11-20 05:53:54.589883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:34.676 [2024-11-20 05:53:54.589890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:34.676 [2024-11-20 05:53:54.589901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:34.676 [2024-11-20 05:53:54.589908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:34.676 [2024-11-20 05:53:54.589920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:34.676 [2024-11-20 05:53:54.589930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:34.676 [2024-11-20 05:53:54.589940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:34.676 [2024-11-20 05:53:54.589947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:34.676 [2024-11-20 05:53:54.589959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:34.676 [2024-11-20 05:53:54.589966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:34.676 [2024-11-20 05:53:54.589977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:34.676 [2024-11-20 05:53:54.589986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:34.677 [2024-11-20 05:53:54.589997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:34.677 [2024-11-20 05:53:54.590005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:34.677 [2024-11-20 05:53:54.590015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:34.677 [2024-11-20 05:53:54.590023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:34.677 [2024-11-20 05:53:54.590035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:34.677 [2024-11-20 05:53:54.590042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:42:34.677 [2024-11-20 05:53:54.590053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:34.677 [2024-11-20 05:53:54.590099] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:42:34.677 [2024-11-20 05:53:54.590114] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:42:38.874 [2024-11-20 05:53:58.067163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.067271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:42:38.874 [2024-11-20 05:53:58.067289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3483.769 ms 00:42:38.874 [2024-11-20 05:53:58.067300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.114830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.114882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:38.874 [2024-11-20 05:53:58.114897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.267 ms 00:42:38.874 [2024-11-20 05:53:58.114909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.115079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.115094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:38.874 [2024-11-20 05:53:58.115103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:42:38.874 [2024-11-20 05:53:58.115122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.168280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.168338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:38.874 [2024-11-20 05:53:58.168350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.201 ms 00:42:38.874 [2024-11-20 05:53:58.168362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.168416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.168433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:38.874 [2024-11-20 05:53:58.168443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:42:38.874 [2024-11-20 05:53:58.168453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.169290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.169317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:38.874 [2024-11-20 05:53:58.169328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:42:38.874 [2024-11-20 05:53:58.169337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 
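Each FTL management step above is traced as a fixed quartet of records from mngt/ftl_mngt.c:427-431: Action, name, duration, status. Two of the numbers are worth cross-checking against the layout dump above:

    20971520 L2P entries * 4 B/entry = 83886080 B = 80.00 MiB   (the "Region l2p" size)
    NV cache scrub: 3483.769 ms of the 4038.016 ms "FTL startup" total below, roughly 86%

The --l2p_dram_limit 10 argument caps how much of that 80 MiB L2P table may stay resident in DRAM, which is what the "l2p maximum resident size is: 9 (of 10) MiB" notice below reports.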
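The earlier "/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected" message is a shell slip, not an FTL failure: the traced test '[' '' -eq 1 ']' hands an empty expansion to a numeric comparison, so [ prints the error, returns nonzero, and the script simply takes the non-matching branch (startup proceeds and finishes with result 0, as above). A hedged guard; FLAG is a stand-in name, since the variable tested at line 54 is not visible in the trace:

    # ${FLAG:-0} substitutes 0 when FLAG is unset or empty, so the numeric
    # comparison always sees an integer; [[ ]] additionally treats an empty
    # arithmetic operand as 0 instead of erroring like plain [.
    if [[ "${FLAG:-0}" -eq 1 ]]; then
        echo "optional branch"
    fi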
[2024-11-20 05:53:58.169439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.169455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:38.874 [2024-11-20 05:53:58.169466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:42:38.874 [2024-11-20 05:53:58.169480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.193298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.193369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:38.874 [2024-11-20 05:53:58.193382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.836 ms 00:42:38.874 [2024-11-20 05:53:58.193393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.217730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:38.874 [2024-11-20 05:53:58.223014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.223047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:38.874 [2024-11-20 05:53:58.223062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.535 ms 00:42:38.874 [2024-11-20 05:53:58.223070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.317719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.317787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:42:38.874 [2024-11-20 05:53:58.317812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.753 ms 00:42:38.874 [2024-11-20 05:53:58.317822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.874 [2024-11-20 05:53:58.318037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.874 [2024-11-20 05:53:58.318053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:38.875 [2024-11-20 05:53:58.318069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:42:38.875 [2024-11-20 05:53:58.318077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.353551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.353594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:42:38.875 [2024-11-20 05:53:58.353627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.491 ms 00:42:38.875 [2024-11-20 05:53:58.353635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.388243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.388278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:42:38.875 [2024-11-20 05:53:58.388293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.623 ms 00:42:38.875 [2024-11-20 05:53:58.388301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.389104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.389129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:38.875 
[2024-11-20 05:53:58.389142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:42:38.875 [2024-11-20 05:53:58.389154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.487913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.487997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:42:38.875 [2024-11-20 05:53:58.488039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.892 ms 00:42:38.875 [2024-11-20 05:53:58.488049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.524308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.524356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:42:38.875 [2024-11-20 05:53:58.524387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.247 ms 00:42:38.875 [2024-11-20 05:53:58.524396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.558805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.558851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:42:38.875 [2024-11-20 05:53:58.558881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.432 ms 00:42:38.875 [2024-11-20 05:53:58.558889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.593263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.593319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:38.875 [2024-11-20 05:53:58.593335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.398 ms 00:42:38.875 [2024-11-20 05:53:58.593342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.593407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.593417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:38.875 [2024-11-20 05:53:58.593431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:42:38.875 [2024-11-20 05:53:58.593439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.593554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:38.875 [2024-11-20 05:53:58.593566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:38.875 [2024-11-20 05:53:58.593580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:42:38.875 [2024-11-20 05:53:58.593588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:38.875 [2024-11-20 05:53:58.595015] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4038.016 ms, result 0 00:42:38.875 { 00:42:38.875 "name": "ftl0", 00:42:38.875 "uuid": "6682375f-6889-4b06-a1ab-eca8cf79edd1" 00:42:38.875 } 00:42:38.875 05:53:58 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:42:38.875 05:53:58 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:42:39.134 05:53:58 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:42:39.134 05:53:58 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:42:39.134 [2024-11-20 05:53:59.017208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.134 [2024-11-20 05:53:59.017287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:39.134 [2024-11-20 05:53:59.017302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:39.134 [2024-11-20 05:53:59.017324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.134 [2024-11-20 05:53:59.017349] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:39.134 [2024-11-20 05:53:59.022007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.134 [2024-11-20 05:53:59.022044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:39.134 [2024-11-20 05:53:59.022059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:42:39.134 [2024-11-20 05:53:59.022067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.134 [2024-11-20 05:53:59.022343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.134 [2024-11-20 05:53:59.022381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:39.134 [2024-11-20 05:53:59.022393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:42:39.134 [2024-11-20 05:53:59.022401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.134 [2024-11-20 05:53:59.024800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.134 [2024-11-20 05:53:59.024821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:39.134 [2024-11-20 05:53:59.024834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:42:39.134 [2024-11-20 05:53:59.024857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.134 [2024-11-20 05:53:59.029586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.134 [2024-11-20 05:53:59.029622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:39.134 [2024-11-20 05:53:59.029638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.716 ms 00:42:39.134 [2024-11-20 05:53:59.029645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.065578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.065619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:39.400 [2024-11-20 05:53:59.065633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.918 ms 00:42:39.400 [2024-11-20 05:53:59.065640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.086748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.086787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:39.400 [2024-11-20 05:53:59.086807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.083 ms 00:42:39.400 [2024-11-20 05:53:59.086815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.086980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.086992] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:39.400 [2024-11-20 05:53:59.087003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:42:39.400 [2024-11-20 05:53:59.087010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.121365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.121401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:39.400 [2024-11-20 05:53:59.121429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.396 ms 00:42:39.400 [2024-11-20 05:53:59.121436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.156160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.156195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:39.400 [2024-11-20 05:53:59.156209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.746 ms 00:42:39.400 [2024-11-20 05:53:59.156217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.190832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.190870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:39.400 [2024-11-20 05:53:59.190899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.636 ms 00:42:39.400 [2024-11-20 05:53:59.190907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.224615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.400 [2024-11-20 05:53:59.224653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:39.400 [2024-11-20 05:53:59.224666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.682 ms 00:42:39.400 [2024-11-20 05:53:59.224674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.400 [2024-11-20 05:53:59.224728] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:39.400 [2024-11-20 05:53:59.224744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224845] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.224995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 
[2024-11-20 05:53:59.225067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:42:39.400 [2024-11-20 05:53:59.225103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:42:39.401 [2024-11-20 05:53:59.225307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:39.401 [2024-11-20 05:53:59.225696] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:39.401 [2024-11-20 05:53:59.225709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6682375f-6889-4b06-a1ab-eca8cf79edd1 00:42:39.401 [2024-11-20 05:53:59.225717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:42:39.401 [2024-11-20 05:53:59.225730] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:42:39.401 [2024-11-20 05:53:59.225738] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:42:39.401 [2024-11-20 05:53:59.225752] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:42:39.401 [2024-11-20 05:53:59.225759] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:39.401 [2024-11-20 05:53:59.225769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:39.401 [2024-11-20 05:53:59.225777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:39.401 [2024-11-20 05:53:59.225786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:39.401 [2024-11-20 05:53:59.225792] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:42:39.401 [2024-11-20 05:53:59.225810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.401 [2024-11-20 05:53:59.225819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:39.401 [2024-11-20 05:53:59.225831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:42:39.401 [2024-11-20 05:53:59.225838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.401 [2024-11-20 05:53:59.245986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.401 [2024-11-20 05:53:59.246020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:39.401 [2024-11-20 05:53:59.246049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.133 ms 00:42:39.401 [2024-11-20 05:53:59.246058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.401 [2024-11-20 05:53:59.246681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:39.401 [2024-11-20 05:53:59.246701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:39.402 [2024-11-20 05:53:59.246716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:42:39.402 [2024-11-20 05:53:59.246724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.402 [2024-11-20 05:53:59.313802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.402 [2024-11-20 05:53:59.313846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:39.402 [2024-11-20 05:53:59.313860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.402 [2024-11-20 05:53:59.313868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.402 [2024-11-20 05:53:59.313945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.402 [2024-11-20 05:53:59.313954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:39.402 [2024-11-20 05:53:59.313968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.402 [2024-11-20 05:53:59.313976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.402 [2024-11-20 05:53:59.314081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.402 [2024-11-20 05:53:59.314094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:39.402 [2024-11-20 05:53:59.314105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.402 [2024-11-20 05:53:59.314113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.402 [2024-11-20 05:53:59.314138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.402 [2024-11-20 05:53:59.314146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:39.402 [2024-11-20 05:53:59.314156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.402 [2024-11-20 05:53:59.314164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.444383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.444456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:39.665 [2024-11-20 05:53:59.444474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:42:39.665 [2024-11-20 05:53:59.444483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:39.665 [2024-11-20 05:53:59.551260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:39.665 [2024-11-20 05:53:59.551436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:39.665 [2024-11-20 05:53:59.551550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:39.665 [2024-11-20 05:53:59.551715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:39.665 [2024-11-20 05:53:59.551794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:39.665 [2024-11-20 05:53:59.551893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.551956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:39.665 [2024-11-20 05:53:59.551965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:39.665 [2024-11-20 05:53:59.551976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:39.665 [2024-11-20 05:53:59.551983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:39.665 [2024-11-20 05:53:59.552136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.919 ms, result 0 00:42:39.665 true 00:42:39.665 05:53:59 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77440 
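The xtrace records that follow come from the killprocess helper in common/autotest_common.sh (the @952-@976 markers are its source line numbers) tearing down the ftl_restore app, pid 77440. A minimal sketch of that helper, reconstructed only from the traced commands below; the guard conditions and the elided sudo branch are assumptions about control flow, not SPDK's verbatim source:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                            # @952: require a pid argument
        kill -0 "$pid" || return                             # @956: only proceed if the process is alive
        local process_name=$pid
        if [[ $(uname) == Linux ]]; then                     # @957: comm lookup is Linux-only
            process_name=$(ps --no-headers -o comm= "$pid")  # @958: resolves to reactor_0 here
        fi
        if [[ $process_name == sudo ]]; then                 # @962: a sudo wrapper would need its child killed
            :                                                # branch not taken in this run; body elided
        fi
        echo "killing process with pid $pid"                 # @970
        kill "$pid"                                          # @971: default SIGTERM
        wait "$pid"                                          # @976: reap it so the exit status is observed
    }

In this run ps resolves the comm to reactor_0 (the SPDK reactor thread), so the plain kill/wait path runs, and the 'killing process with pid 77440' line below is the helper's own echo.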
00:42:39.665 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 77440 ']' 00:42:39.665 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 77440 00:42:39.665 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:42:39.665 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:42:39.665 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77440 00:42:39.925 killing process with pid 77440 00:42:39.925 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:42:39.925 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:42:39.925 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77440' 00:42:39.925 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 77440 00:42:39.925 05:53:59 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 77440 00:42:48.053 05:54:06 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:42:50.592 262144+0 records in 00:42:50.592 262144+0 records out 00:42:50.592 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62864 s, 296 MB/s 00:42:50.592 05:54:10 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:42:51.974 05:54:11 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:51.974 [2024-11-20 05:54:11.874030] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:42:51.974 [2024-11-20 05:54:11.874734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77715 ] 00:42:52.234 [2024-11-20 05:54:12.060411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:52.498 [2024-11-20 05:54:12.196119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.770 [2024-11-20 05:54:12.618276] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:52.770 [2024-11-20 05:54:12.618353] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:53.054 [2024-11-20 05:54:12.784711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.784798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:53.054 [2024-11-20 05:54:12.784828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:53.054 [2024-11-20 05:54:12.784836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.784883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.784893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:53.054 [2024-11-20 05:54:12.784904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:42:53.054 [2024-11-20 05:54:12.784911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.784929] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:42:53.054 [2024-11-20 05:54:12.785807] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:53.054 [2024-11-20 05:54:12.785845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.785854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:53.054 [2024-11-20 05:54:12.785862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:42:53.054 [2024-11-20 05:54:12.785870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.788334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:53.054 [2024-11-20 05:54:12.807867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.807912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:53.054 [2024-11-20 05:54:12.807925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.571 ms 00:42:53.054 [2024-11-20 05:54:12.807933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.808020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.808031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:53.054 [2024-11-20 05:54:12.808040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:42:53.054 [2024-11-20 05:54:12.808047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.820377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.820414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:53.054 [2024-11-20 05:54:12.820441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:42:53.054 [2024-11-20 05:54:12.820457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.820557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.820571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:53.054 [2024-11-20 05:54:12.820580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:42:53.054 [2024-11-20 05:54:12.820588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.820646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.820656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:53.054 [2024-11-20 05:54:12.820664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:53.054 [2024-11-20 05:54:12.820671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.820717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:53.054 [2024-11-20 05:54:12.826370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.826404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:53.054 [2024-11-20 05:54:12.826414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.695 ms 00:42:53.054 [2024-11-20 05:54:12.826428] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.826473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.826482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:53.054 [2024-11-20 05:54:12.826490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:42:53.054 [2024-11-20 05:54:12.826499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.826531] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:53.054 [2024-11-20 05:54:12.826559] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:53.054 [2024-11-20 05:54:12.826594] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:53.054 [2024-11-20 05:54:12.826617] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:53.054 [2024-11-20 05:54:12.826738] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:53.054 [2024-11-20 05:54:12.826756] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:53.054 [2024-11-20 05:54:12.826766] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:53.054 [2024-11-20 05:54:12.826776] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:53.054 [2024-11-20 05:54:12.826785] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:53.054 [2024-11-20 05:54:12.826794] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:53.054 [2024-11-20 05:54:12.826810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:53.054 [2024-11-20 05:54:12.826819] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:53.054 [2024-11-20 05:54:12.826832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:53.054 [2024-11-20 05:54:12.826855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.826863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:53.054 [2024-11-20 05:54:12.826871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:42:53.054 [2024-11-20 05:54:12.826879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.826950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.054 [2024-11-20 05:54:12.826958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:53.054 [2024-11-20 05:54:12.826965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:42:53.054 [2024-11-20 05:54:12.826972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.054 [2024-11-20 05:54:12.827070] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:53.054 [2024-11-20 05:54:12.827083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:53.054 [2024-11-20 05:54:12.827092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:42:53.054 [2024-11-20 05:54:12.827099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:53.054 [2024-11-20 05:54:12.827108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:53.054 [2024-11-20 05:54:12.827115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:53.054 [2024-11-20 05:54:12.827122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:53.054 [2024-11-20 05:54:12.827129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:53.054 [2024-11-20 05:54:12.827136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:53.054 [2024-11-20 05:54:12.827143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:53.054 [2024-11-20 05:54:12.827150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:53.054 [2024-11-20 05:54:12.827157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:53.054 [2024-11-20 05:54:12.827163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:53.054 [2024-11-20 05:54:12.827169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:53.055 [2024-11-20 05:54:12.827175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:53.055 [2024-11-20 05:54:12.827194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:53.055 [2024-11-20 05:54:12.827207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:53.055 [2024-11-20 05:54:12.827227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:53.055 [2024-11-20 05:54:12.827248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:53.055 [2024-11-20 05:54:12.827267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:53.055 [2024-11-20 05:54:12.827285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:53.055 [2024-11-20 05:54:12.827304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:53.055 [2024-11-20 05:54:12.827316] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:42:53.055 [2024-11-20 05:54:12.827322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:53.055 [2024-11-20 05:54:12.827328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:53.055 [2024-11-20 05:54:12.827335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:53.055 [2024-11-20 05:54:12.827341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:53.055 [2024-11-20 05:54:12.827347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:53.055 [2024-11-20 05:54:12.827360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:53.055 [2024-11-20 05:54:12.827366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827371] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:53.055 [2024-11-20 05:54:12.827379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:53.055 [2024-11-20 05:54:12.827385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:53.055 [2024-11-20 05:54:12.827398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:53.055 [2024-11-20 05:54:12.827405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:53.055 [2024-11-20 05:54:12.827411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:53.055 [2024-11-20 05:54:12.827417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:53.055 [2024-11-20 05:54:12.827423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:53.055 [2024-11-20 05:54:12.827429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:53.055 [2024-11-20 05:54:12.827437] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:53.055 [2024-11-20 05:54:12.827446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:53.055 [2024-11-20 05:54:12.827461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:53.055 [2024-11-20 05:54:12.827468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:53.055 [2024-11-20 05:54:12.827475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:53.055 [2024-11-20 05:54:12.827482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:53.055 [2024-11-20 05:54:12.827489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:53.055 [2024-11-20 05:54:12.827497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:53.055 [2024-11-20 05:54:12.827504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:53.055 [2024-11-20 05:54:12.827511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:53.055 [2024-11-20 05:54:12.827517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:53.055 [2024-11-20 05:54:12.827556] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:53.055 [2024-11-20 05:54:12.827570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:53.055 [2024-11-20 05:54:12.827585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:53.055 [2024-11-20 05:54:12.827592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:53.055 [2024-11-20 05:54:12.827599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:53.055 [2024-11-20 05:54:12.827609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.827618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:53.055 [2024-11-20 05:54:12.827625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:42:53.055 [2024-11-20 05:54:12.827632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.876674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.876738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:53.055 [2024-11-20 05:54:12.876750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.084 ms 00:42:53.055 [2024-11-20 05:54:12.876758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.876856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.876866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:53.055 [2024-11-20 05:54:12.876874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.059 ms 00:42:53.055 [2024-11-20 05:54:12.876881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.938607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.938655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:53.055 [2024-11-20 05:54:12.938667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.756 ms 00:42:53.055 [2024-11-20 05:54:12.938675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.938734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.938743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:53.055 [2024-11-20 05:54:12.938760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:53.055 [2024-11-20 05:54:12.938768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.939586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.939606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:53.055 [2024-11-20 05:54:12.939615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:42:53.055 [2024-11-20 05:54:12.939623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.939749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.939770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:53.055 [2024-11-20 05:54:12.939779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:42:53.055 [2024-11-20 05:54:12.939796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.055 [2024-11-20 05:54:12.962634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.055 [2024-11-20 05:54:12.962672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:53.055 [2024-11-20 05:54:12.962689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.847 ms 00:42:53.055 [2024-11-20 05:54:12.962698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.330 [2024-11-20 05:54:12.982074] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:42:53.330 [2024-11-20 05:54:12.982115] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:53.330 [2024-11-20 05:54:12.982127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.330 [2024-11-20 05:54:12.982135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:53.330 [2024-11-20 05:54:12.982145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.333 ms 00:42:53.330 [2024-11-20 05:54:12.982152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.330 [2024-11-20 05:54:13.009900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.330 [2024-11-20 05:54:13.009945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:53.330 [2024-11-20 05:54:13.009973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.761 ms 00:42:53.330 [2024-11-20 05:54:13.009981] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.330 [2024-11-20 05:54:13.027517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.330 [2024-11-20 05:54:13.027566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:53.330 [2024-11-20 05:54:13.027592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.531 ms 00:42:53.330 [2024-11-20 05:54:13.027599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.330 [2024-11-20 05:54:13.044769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.330 [2024-11-20 05:54:13.044815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:53.330 [2024-11-20 05:54:13.044826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.168 ms 00:42:53.330 [2024-11-20 05:54:13.044850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.330 [2024-11-20 05:54:13.045481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.330 [2024-11-20 05:54:13.045517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:53.331 [2024-11-20 05:54:13.045526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:42:53.331 [2024-11-20 05:54:13.045534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.141723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.141817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:53.331 [2024-11-20 05:54:13.141849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.346 ms 00:42:53.331 [2024-11-20 05:54:13.141865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.152598] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:53.331 [2024-11-20 05:54:13.157228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.157262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:53.331 [2024-11-20 05:54:13.157291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.329 ms 00:42:53.331 [2024-11-20 05:54:13.157300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.157418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.157429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:53.331 [2024-11-20 05:54:13.157439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:42:53.331 [2024-11-20 05:54:13.157447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.157544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.157555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:53.331 [2024-11-20 05:54:13.157564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:42:53.331 [2024-11-20 05:54:13.157572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.157593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.157602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:42:53.331 [2024-11-20 05:54:13.157611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:53.331 [2024-11-20 05:54:13.157618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.157655] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:53.331 [2024-11-20 05:54:13.157666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.157677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:53.331 [2024-11-20 05:54:13.157685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:42:53.331 [2024-11-20 05:54:13.157692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.193034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.193073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:53.331 [2024-11-20 05:54:13.193100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.392 ms 00:42:53.331 [2024-11-20 05:54:13.193108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.193190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:53.331 [2024-11-20 05:54:13.193200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:53.331 [2024-11-20 05:54:13.193209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:42:53.331 [2024-11-20 05:54:13.193216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:53.331 [2024-11-20 05:54:13.194787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.313 ms, result 0 00:42:54.712  [2024-11-20T05:54:15.570Z] Copying: 28/1024 [MB] (28 MBps) [... 34 intermediate per-second progress records omitted; throughput held steady at 27-30 MBps throughout ...] [2024-11-20T05:54:48.524Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-20 05:54:48.298116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.298184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:43:28.605 [2024-11-20 05:54:48.298200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:28.605 [2024-11-20 05:54:48.298209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.298231] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:43:28.605 [2024-11-20 05:54:48.303114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.303152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:43:28.605 [2024-11-20 05:54:48.303162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.876 ms 00:43:28.605 [2024-11-20 05:54:48.303178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.305071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.305109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:43:28.605 [2024-11-20 05:54:48.305119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:43:28.605 [2024-11-20 05:54:48.305126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.322436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.322483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:43:28.605 [2024-11-20 05:54:48.322498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.326 ms 00:43:28.605 [2024-11-20 05:54:48.322508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.327551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.327585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:43:28.605 [2024-11-20 05:54:48.327611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.011 ms 00:43:28.605 [2024-11-20 05:54:48.327619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.364507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.364551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:43:28.605 [2024-11-20 05:54:48.364563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.901 ms 00:43:28.605 [2024-11-20 05:54:48.364571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605
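The restore copy is now complete; a quick cross-check of the numbers the log itself reports (262144 4-KiB blocks written by dd, 1024 MiB copied by spdk_dd at an average of 29 MBps, taking spdk_dd's MB as MiB):

    $262144 \times 4096\,\mathrm{B} = 2^{30}\,\mathrm{B} = 1024\,\mathrm{MiB}$
    $1024\,\mathrm{MiB} \div 29\,\mathrm{MiB/s} \approx 35\,\mathrm{s}$

which is consistent with the progress timestamps (first record at 05:54:15, last at 05:54:48). dd itself reported 296 MB/s writing the source file (likely buffered by the page cache), so the ~29 MBps average reflects the FTL write path, not the source file, pacing the copy.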
[2024-11-20 05:54:48.385816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.385859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:43:28.605 [2024-11-20 05:54:48.385887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.248 ms 00:43:28.605 [2024-11-20 05:54:48.385896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.386046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.386057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:43:28.605 [2024-11-20 05:54:48.386083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:43:28.605 [2024-11-20 05:54:48.386090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.422203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.422245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:43:28.605 [2024-11-20 05:54:48.422257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.168 ms 00:43:28.605 [2024-11-20 05:54:48.422264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.457416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.457457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:43:28.605 [2024-11-20 05:54:48.457497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.183 ms 00:43:28.605 [2024-11-20 05:54:48.457504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.605 [2024-11-20 05:54:48.492174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.605 [2024-11-20 05:54:48.492214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:43:28.605 [2024-11-20 05:54:48.492226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.701 ms 00:43:28.605 [2024-11-20 05:54:48.492234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.864 [2024-11-20 05:54:48.525689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.864 [2024-11-20 05:54:48.525728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:43:28.864 [2024-11-20 05:54:48.525739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.400 ms 00:43:28.864 [2024-11-20 05:54:48.525747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.864 [2024-11-20 05:54:48.525825] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:43:28.864 [2024-11-20 05:54:48.525844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2-100 report the identical line: 0 / 261120 wr_cnt: 0 state: free ...] 00:43:28.865 [2024-11-20 05:54:48.526617] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:43:28.865 [2024-11-20 05:54:48.526631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6682375f-6889-4b06-a1ab-eca8cf79edd1 00:43:28.865 [2024-11-20 05:54:48.526646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:43:28.865 [2024-11-20 05:54:48.526653] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:43:28.865 [2024-11-20 05:54:48.526660] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:43:28.865 [2024-11-20 05:54:48.526667] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:43:28.865 [2024-11-20 05:54:48.526675] ftl_debug.c:
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:43:28.865 [2024-11-20 05:54:48.526682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:43:28.865 [2024-11-20 05:54:48.526690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:43:28.865 [2024-11-20 05:54:48.526709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:43:28.865 [2024-11-20 05:54:48.526715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:43:28.865 [2024-11-20 05:54:48.526723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.865 [2024-11-20 05:54:48.526731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:43:28.865 [2024-11-20 05:54:48.526739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:43:28.865 [2024-11-20 05:54:48.526746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.865 [2024-11-20 05:54:48.547616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.865 [2024-11-20 05:54:48.547650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:43:28.865 [2024-11-20 05:54:48.547677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.881 ms 00:43:28.865 [2024-11-20 05:54:48.547684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.865 [2024-11-20 05:54:48.548266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:28.865 [2024-11-20 05:54:48.548286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:43:28.865 [2024-11-20 05:54:48.548294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:43:28.865 [2024-11-20 05:54:48.548302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.865 [2024-11-20 05:54:48.601218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:28.865 [2024-11-20 05:54:48.601255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:28.865 [2024-11-20 05:54:48.601266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:28.865 [2024-11-20 05:54:48.601275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.865 [2024-11-20 05:54:48.601338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:28.865 [2024-11-20 05:54:48.601348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:28.865 [2024-11-20 05:54:48.601355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:28.865 [2024-11-20 05:54:48.601363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.865 [2024-11-20 05:54:48.601495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:28.865 [2024-11-20 05:54:48.601511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:28.865 [2024-11-20 05:54:48.601521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:28.865 [2024-11-20 05:54:48.601529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.865 [2024-11-20 05:54:48.601547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:28.866 [2024-11-20 05:54:48.601555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:28.866 [2024-11-20 05:54:48.601563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
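The WAF reported in the statistics dump above is "inf" for a reason visible in the neighbouring counters: write amplification is total media writes divided by user writes, and this pass recorded 960 internal/metadata writes against 0 user writes. A minimal shell sketch of that arithmetic, with the counter values copied from the dump above (the variable names and the zero guard are illustrative, not part of the test):

    #!/usr/bin/env bash
    # Recompute the WAF that ftl_debug.c reports: total media writes / user writes.
    total_writes=960   # "total writes" from the ftl0 stats dump above
    user_writes=0      # "user writes" from the ftl0 stats dump above
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"   # no user I/O reached the device in this pass, matching the log
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi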
00:43:28.866 [2024-11-20 05:54:48.601570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:28.866 [2024-11-20 05:54:48.733897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:28.866 [2024-11-20 05:54:48.733972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:28.866 [2024-11-20 05:54:48.733987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:28.866 [2024-11-20 05:54:48.734012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.124 [2024-11-20 05:54:48.837560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.124 [2024-11-20 05:54:48.837629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:29.124 [2024-11-20 05:54:48.837644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.124 [2024-11-20 05:54:48.837669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.124 [2024-11-20 05:54:48.837787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.124 [2024-11-20 05:54:48.837797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:29.124 [2024-11-20 05:54:48.837806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.124 [2024-11-20 05:54:48.837814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.124 [2024-11-20 05:54:48.837873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.124 [2024-11-20 05:54:48.837882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:29.124 [2024-11-20 05:54:48.837890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.125 [2024-11-20 05:54:48.837897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.125 [2024-11-20 05:54:48.838011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.125 [2024-11-20 05:54:48.838027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:29.125 [2024-11-20 05:54:48.838036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.125 [2024-11-20 05:54:48.838044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.125 [2024-11-20 05:54:48.838081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.125 [2024-11-20 05:54:48.838096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:43:29.125 [2024-11-20 05:54:48.838104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.125 [2024-11-20 05:54:48.838111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.125 [2024-11-20 05:54:48.838154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.125 [2024-11-20 05:54:48.838168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:29.125 [2024-11-20 05:54:48.838176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.125 [2024-11-20 05:54:48.838184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.125 [2024-11-20 05:54:48.838230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:43:29.125 [2024-11-20 05:54:48.838239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:29.125 [2024-11-20 05:54:48.838246] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:43:29.125 [2024-11-20 05:54:48.838254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:29.125 [2024-11-20 05:54:48.838407] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.287 ms, result 0 00:43:30.556 00:43:30.556 00:43:30.556 05:54:50 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:43:30.815 [2024-11-20 05:54:50.514529] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:43:30.815 [2024-11-20 05:54:50.514665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78100 ] 00:43:30.815 [2024-11-20 05:54:50.691701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:31.075 [2024-11-20 05:54:50.830845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:31.335 [2024-11-20 05:54:51.246130] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:31.335 [2024-11-20 05:54:51.246223] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:43:31.595 [2024-11-20 05:54:51.406371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.406428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:43:31.595 [2024-11-20 05:54:51.406465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:31.595 [2024-11-20 05:54:51.406473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.406522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.406531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:43:31.595 [2024-11-20 05:54:51.406544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:43:31.595 [2024-11-20 05:54:51.406552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.406570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:43:31.595 [2024-11-20 05:54:51.407482] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:43:31.595 [2024-11-20 05:54:51.407501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.407509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:43:31.595 [2024-11-20 05:54:51.407517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:43:31.595 [2024-11-20 05:54:51.407525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.410042] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:43:31.595 [2024-11-20 05:54:51.429982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.430019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:43:31.595 [2024-11-20 
05:54:51.430047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.980 ms 00:43:31.595 [2024-11-20 05:54:51.430055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.430121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.430131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:43:31.595 [2024-11-20 05:54:51.430140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:43:31.595 [2024-11-20 05:54:51.430147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.442507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.442537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:43:31.595 [2024-11-20 05:54:51.442548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.322 ms 00:43:31.595 [2024-11-20 05:54:51.442576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.442663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.442676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:43:31.595 [2024-11-20 05:54:51.442684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:43:31.595 [2024-11-20 05:54:51.442692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.595 [2024-11-20 05:54:51.442744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.595 [2024-11-20 05:54:51.442753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:43:31.595 [2024-11-20 05:54:51.442761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:43:31.596 [2024-11-20 05:54:51.442769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.596 [2024-11-20 05:54:51.442798] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:43:31.596 [2024-11-20 05:54:51.448417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.596 [2024-11-20 05:54:51.448444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:43:31.596 [2024-11-20 05:54:51.448454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.642 ms 00:43:31.596 [2024-11-20 05:54:51.448481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.596 [2024-11-20 05:54:51.448512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.596 [2024-11-20 05:54:51.448520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:43:31.596 [2024-11-20 05:54:51.448528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:43:31.596 [2024-11-20 05:54:51.448535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.596 [2024-11-20 05:54:51.448571] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:43:31.596 [2024-11-20 05:54:51.448593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:43:31.596 [2024-11-20 05:54:51.448627] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:43:31.596 [2024-11-20 05:54:51.448647] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:43:31.596 [2024-11-20 05:54:51.448736] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:43:31.596 [2024-11-20 05:54:51.448746] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:43:31.596 [2024-11-20 05:54:51.448757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:43:31.596 [2024-11-20 05:54:51.448768] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:43:31.596 [2024-11-20 05:54:51.448777] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:43:31.596 [2024-11-20 05:54:51.448786] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:43:31.596 [2024-11-20 05:54:51.448795] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:43:31.596 [2024-11-20 05:54:51.448803] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:43:31.596 [2024-11-20 05:54:51.448813] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:43:31.596 [2024-11-20 05:54:51.448832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.596 [2024-11-20 05:54:51.448840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:43:31.596 [2024-11-20 05:54:51.448848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:43:31.596 [2024-11-20 05:54:51.448855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.596 [2024-11-20 05:54:51.448925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.596 [2024-11-20 05:54:51.448933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:43:31.596 [2024-11-20 05:54:51.448941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:43:31.596 [2024-11-20 05:54:51.448948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.596 [2024-11-20 05:54:51.449045] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:43:31.596 [2024-11-20 05:54:51.449059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:43:31.596 [2024-11-20 05:54:51.449068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:43:31.596 [2024-11-20 05:54:51.449092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:43:31.596 [2024-11-20 05:54:51.449115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:31.596 [2024-11-20 05:54:51.449130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:43:31.596 [2024-11-20 05:54:51.449137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:43:31.596 [2024-11-20 05:54:51.449144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:43:31.596 [2024-11-20 05:54:51.449152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:43:31.596 [2024-11-20 05:54:51.449158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:43:31.596 [2024-11-20 05:54:51.449176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:43:31.596 [2024-11-20 05:54:51.449191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:43:31.596 [2024-11-20 05:54:51.449211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:43:31.596 [2024-11-20 05:54:51.449232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:43:31.596 [2024-11-20 05:54:51.449251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:43:31.596 [2024-11-20 05:54:51.449271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:43:31.596 [2024-11-20 05:54:51.449291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:31.596 [2024-11-20 05:54:51.449304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:43:31.596 [2024-11-20 05:54:51.449311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:43:31.596 [2024-11-20 05:54:51.449317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:43:31.596 [2024-11-20 05:54:51.449323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:43:31.596 [2024-11-20 05:54:51.449329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:43:31.596 [2024-11-20 05:54:51.449335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:43:31.596 [2024-11-20 05:54:51.449348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:43:31.596 [2024-11-20 05:54:51.449354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449360] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:43:31.596 [2024-11-20 05:54:51.449367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:43:31.596 [2024-11-20 05:54:51.449375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:43:31.596 [2024-11-20 05:54:51.449389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:43:31.596 [2024-11-20 05:54:51.449396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:43:31.596 [2024-11-20 05:54:51.449403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:43:31.596 [2024-11-20 05:54:51.449409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:43:31.596 [2024-11-20 05:54:51.449415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:43:31.596 [2024-11-20 05:54:51.449422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:43:31.596 [2024-11-20 05:54:51.449430] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:43:31.596 [2024-11-20 05:54:51.449439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:31.596 [2024-11-20 05:54:51.449448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:43:31.596 [2024-11-20 05:54:51.449455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:43:31.596 [2024-11-20 05:54:51.449463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:43:31.596 [2024-11-20 05:54:51.449471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:43:31.596 [2024-11-20 05:54:51.449478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:43:31.596 [2024-11-20 05:54:51.449493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:43:31.596 [2024-11-20 05:54:51.449501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:43:31.596 [2024-11-20 05:54:51.449509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:43:31.596 [2024-11-20 05:54:51.449517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:43:31.596 [2024-11-20 05:54:51.449524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:43:31.596 [2024-11-20 05:54:51.449531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:43:31.597 [2024-11-20 05:54:51.449539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:43:31.597 [2024-11-20 05:54:51.449546] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:43:31.597 [2024-11-20 05:54:51.449553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:43:31.597 [2024-11-20 05:54:51.449560] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:43:31.597 [2024-11-20 05:54:51.449572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:43:31.597 [2024-11-20 05:54:51.449582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:43:31.597 [2024-11-20 05:54:51.449589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:43:31.597 [2024-11-20 05:54:51.449597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:43:31.597 [2024-11-20 05:54:51.449605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:43:31.597 [2024-11-20 05:54:51.449613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.597 [2024-11-20 05:54:51.449621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:43:31.597 [2024-11-20 05:54:51.449628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:43:31.597 [2024-11-20 05:54:51.449636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.597 [2024-11-20 05:54:51.497138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.597 [2024-11-20 05:54:51.497180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:31.597 [2024-11-20 05:54:51.497192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.541 ms 00:43:31.597 [2024-11-20 05:54:51.497217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.597 [2024-11-20 05:54:51.497308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.597 [2024-11-20 05:54:51.497317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:31.597 [2024-11-20 05:54:51.497325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:43:31.597 [2024-11-20 05:54:51.497344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.562290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.562331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:31.857 [2024-11-20 05:54:51.562360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.002 ms 00:43:31.857 [2024-11-20 05:54:51.562367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.562415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.562424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:31.857 [2024-11-20 05:54:51.562438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:43:31.857 [2024-11-20 05:54:51.562446] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.563292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.563310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:31.857 [2024-11-20 05:54:51.563318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:43:31.857 [2024-11-20 05:54:51.563325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.563449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.563462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:31.857 [2024-11-20 05:54:51.563470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:43:31.857 [2024-11-20 05:54:51.563484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.585029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.585063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:31.857 [2024-11-20 05:54:51.585093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.565 ms 00:43:31.857 [2024-11-20 05:54:51.585101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.604234] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:43:31.857 [2024-11-20 05:54:51.604267] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:43:31.857 [2024-11-20 05:54:51.604296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.604305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:43:31.857 [2024-11-20 05:54:51.604314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.108 ms 00:43:31.857 [2024-11-20 05:54:51.604322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.632845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.632896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:43:31.857 [2024-11-20 05:54:51.632924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.535 ms 00:43:31.857 [2024-11-20 05:54:51.632932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.650354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.650386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:43:31.857 [2024-11-20 05:54:51.650396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.414 ms 00:43:31.857 [2024-11-20 05:54:51.650419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.667737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.667768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:43:31.857 [2024-11-20 05:54:51.667779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:43:31.857 [2024-11-20 05:54:51.667786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 
[2024-11-20 05:54:51.668600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.668630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:31.857 [2024-11-20 05:54:51.668640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:43:31.857 [2024-11-20 05:54:51.668651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:31.857 [2024-11-20 05:54:51.764239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:31.857 [2024-11-20 05:54:51.764327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:43:31.857 [2024-11-20 05:54:51.764350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.748 ms 00:43:31.857 [2024-11-20 05:54:51.764359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.775116] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:32.117 [2024-11-20 05:54:51.779824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 05:54:51.779857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:32.117 [2024-11-20 05:54:51.779885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.423 ms 00:43:32.117 [2024-11-20 05:54:51.779894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.780002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 05:54:51.780013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:43:32.117 [2024-11-20 05:54:51.780022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:43:32.117 [2024-11-20 05:54:51.780034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.780137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 05:54:51.780147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:32.117 [2024-11-20 05:54:51.780157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:43:32.117 [2024-11-20 05:54:51.780163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.780185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 05:54:51.780194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:32.117 [2024-11-20 05:54:51.780202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:43:32.117 [2024-11-20 05:54:51.780210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.780248] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:43:32.117 [2024-11-20 05:54:51.780258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 05:54:51.780266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:43:32.117 [2024-11-20 05:54:51.780274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:43:32.117 [2024-11-20 05:54:51.780281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.817292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 
05:54:51.817328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:32.117 [2024-11-20 05:54:51.817340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.061 ms 00:43:32.117 [2024-11-20 05:54:51.817355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.817436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:32.117 [2024-11-20 05:54:51.817447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:32.117 [2024-11-20 05:54:51.817457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:43:32.117 [2024-11-20 05:54:51.817465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:32.117 [2024-11-20 05:54:51.819033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.871 ms, result 0 00:43:33.495  [2024-11-20T05:54:54.351Z] Copying: 31/1024 [MB] (31 MBps) ... [2024-11-20T05:55:24.146Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 05:55:24.118880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.227 [2024-11-20 05:55:24.118989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:04.227 [2024-11-20 05:55:24.119012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:04.227 [2024-11-20 05:55:24.119025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.227 [2024-11-20 05:55:24.119057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:04.227 [2024-11-20 05:55:24.124124] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.227 [2024-11-20 05:55:24.124179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:04.227 [2024-11-20 05:55:24.124203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:44:04.227 [2024-11-20 05:55:24.124215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.227 [2024-11-20 05:55:24.124983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.227 [2024-11-20 05:55:24.125013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:04.227 [2024-11-20 05:55:24.125027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:44:04.227 [2024-11-20 05:55:24.125039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.227 [2024-11-20 05:55:24.128922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.227 [2024-11-20 05:55:24.128956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:04.227 [2024-11-20 05:55:24.128969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.870 ms 00:44:04.227 [2024-11-20 05:55:24.128981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.227 [2024-11-20 05:55:24.136487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.227 [2024-11-20 05:55:24.136544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:04.227 [2024-11-20 05:55:24.136558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.477 ms 00:44:04.227 [2024-11-20 05:55:24.136568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.174686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.174745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:04.487 [2024-11-20 05:55:24.174774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.063 ms 00:44:04.487 [2024-11-20 05:55:24.174781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.194546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.194589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:04.487 [2024-11-20 05:55:24.194601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.759 ms 00:44:04.487 [2024-11-20 05:55:24.194608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.194757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.194776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:04.487 [2024-11-20 05:55:24.194785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:44:04.487 [2024-11-20 05:55:24.194793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.229556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.229593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:04.487 [2024-11-20 05:55:24.229621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.798 ms 00:44:04.487 [2024-11-20 05:55:24.229628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
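Every management step above is traced as a name/duration/status triple, so a capture like this can be summarized to show where shutdown time goes (here "Persist NV cache metadata" and "Persist band info metadata" dominate at roughly 35-38 ms). A minimal sketch that pairs each step name with its duration, assuming one trace_step record per line as emitted on the console; ftl.log is a hypothetical capture file:

    awk '
        /trace_step: .*name: /     { sub(/.*name: /, "");     name = $0 }
        /trace_step: .*duration: / { sub(/.*duration: /, ""); printf "%-35s %s\n", name, $0 }
    ' ftl.log

Against the records above this prints lines such as "Persist NV cache metadata    38.063 ms".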
00:44:04.487 [2024-11-20 05:55:24.264531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.264585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:04.487 [2024-11-20 05:55:24.264596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.933 ms 00:44:04.487 [2024-11-20 05:55:24.264604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.299663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.299707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:04.487 [2024-11-20 05:55:24.299718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.075 ms 00:44:04.487 [2024-11-20 05:55:24.299726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.335324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.487 [2024-11-20 05:55:24.335367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:04.487 [2024-11-20 05:55:24.335395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.590 ms 00:44:04.487 [2024-11-20 05:55:24.335403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.487 [2024-11-20 05:55:24.335438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:04.487 [2024-11-20 05:55:24.335455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:04.487 [2024-11-20 05:55:24.335575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands 15-89: 0 / 261120 wr_cnt: 0 state: free
00:44:04.488 [2024-11-20 05:55:24.336156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:04.488 [2024-11-20 05:55:24.336245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:04.488 [2024-11-20 05:55:24.336256] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6682375f-6889-4b06-a1ab-eca8cf79edd1 00:44:04.488 [2024-11-20 05:55:24.336264] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:04.488 [2024-11-20 05:55:24.336272] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:04.488 [2024-11-20 05:55:24.336279] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:04.488 [2024-11-20 05:55:24.336287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:04.488 [2024-11-20 05:55:24.336294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:04.488 [2024-11-20 05:55:24.336302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:04.488 [2024-11-20 05:55:24.336323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:04.488 [2024-11-20 05:55:24.336331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:04.488 [2024-11-20 05:55:24.336337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:04.488 [2024-11-20 05:55:24.336345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.488 [2024-11-20 05:55:24.336353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:04.488 [2024-11-20 05:55:24.336362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:44:04.488 [2024-11-20 05:55:24.336369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.488 [2024-11-20 05:55:24.357157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.488 [2024-11-20 05:55:24.357193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:04.488 [2024-11-20 05:55:24.357220] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.771 ms 00:44:04.488 [2024-11-20 05:55:24.357228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.488 [2024-11-20 05:55:24.357884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:04.488 [2024-11-20 05:55:24.357901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:04.488 [2024-11-20 05:55:24.357911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.637 ms 00:44:04.488 [2024-11-20 05:55:24.357924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.747 [2024-11-20 05:55:24.411676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.747 [2024-11-20 05:55:24.411724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:04.747 [2024-11-20 05:55:24.411752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.747 [2024-11-20 05:55:24.411759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.747 [2024-11-20 05:55:24.411829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.747 [2024-11-20 05:55:24.411838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:04.747 [2024-11-20 05:55:24.411846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.747 [2024-11-20 05:55:24.411859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.747 [2024-11-20 05:55:24.411932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.747 [2024-11-20 05:55:24.411945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:04.747 [2024-11-20 05:55:24.411953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.747 [2024-11-20 05:55:24.411961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.747 [2024-11-20 05:55:24.411978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.747 [2024-11-20 05:55:24.411986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:04.747 [2024-11-20 05:55:24.411994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.747 [2024-11-20 05:55:24.412001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.747 [2024-11-20 05:55:24.543455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.747 [2024-11-20 05:55:24.543535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:04.747 [2024-11-20 05:55:24.543549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.747 [2024-11-20 05:55:24.543557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.644995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:04.748 [2024-11-20 05:55:24.645098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:44:04.748 [2024-11-20 05:55:24.645237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:04.748 [2024-11-20 05:55:24.645304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:04.748 [2024-11-20 05:55:24.645470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:04.748 [2024-11-20 05:55:24.645543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:04.748 [2024-11-20 05:55:24.645616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:04.748 [2024-11-20 05:55:24.645684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:04.748 [2024-11-20 05:55:24.645692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:04.748 [2024-11-20 05:55:24.645701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:04.748 [2024-11-20 05:55:24.645855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.944 ms, result 0 00:44:06.124 00:44:06.124 00:44:06.124 05:55:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:44:07.501 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:44:07.501 05:55:27 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:44:07.758 [2024-11-20 05:55:27.481927] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
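The md5sum -c line above is the restore test's pass/fail gate: restore.sh@76 recomputes the digest of the file read back from ftl0 and compares it against the stored testfile.md5 before restore.sh@79 launches the next spdk_dd write pass. Below is a minimal Python sketch of that check, assuming the usual md5sum digest-file format ("<hex digest>  <path>" per line); md5_of and check_md5_file are illustrative names, not helpers from the SPDK tree.

import hashlib

def md5_of(path, chunk=1 << 20):
    # Stream the file so a large testfile does not have to fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def check_md5_file(md5_file):
    # Mirrors "md5sum -c": prints "<path>: OK" or "<path>: FAILED" per entry.
    ok = True
    with open(md5_file) as f:
        for line in f:
            digest, path = line.split(maxsplit=1)
            path = path.strip().lstrip("*")  # '*' marks binary mode in md5sum output
            status = "OK" if md5_of(path) == digest else "FAILED"
            print(f"{path}: {status}")
            ok = ok and status == "OK"
    return ok

# Same digest file the test verifies above:
check_md5_file("/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5")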
00:44:07.758 [2024-11-20 05:55:27.482080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78475 ] 00:44:07.758 [2024-11-20 05:55:27.656245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:08.018 [2024-11-20 05:55:27.786414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:08.287 [2024-11-20 05:55:28.192071] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:08.287 [2024-11-20 05:55:28.192171] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:08.559 [2024-11-20 05:55:28.351756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.351847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:08.559 [2024-11-20 05:55:28.351867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:08.559 [2024-11-20 05:55:28.351875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.351924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.351934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:08.559 [2024-11-20 05:55:28.351946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:44:08.559 [2024-11-20 05:55:28.351953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.351971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:08.559 [2024-11-20 05:55:28.352893] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:08.559 [2024-11-20 05:55:28.352920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.352929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:08.559 [2024-11-20 05:55:28.352938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:44:08.559 [2024-11-20 05:55:28.352946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.355362] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:08.559 [2024-11-20 05:55:28.374998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.375039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:08.559 [2024-11-20 05:55:28.375068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms 00:44:08.559 [2024-11-20 05:55:28.375076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.375140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.375150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:08.559 [2024-11-20 05:55:28.375160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:44:08.559 [2024-11-20 05:55:28.375168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.387557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:08.559 [2024-11-20 05:55:28.387589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:08.559 [2024-11-20 05:55:28.387615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.349 ms 00:44:08.559 [2024-11-20 05:55:28.387628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.387709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.387723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:08.559 [2024-11-20 05:55:28.387731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:44:08.559 [2024-11-20 05:55:28.387738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.387792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.387802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:08.559 [2024-11-20 05:55:28.387810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:08.559 [2024-11-20 05:55:28.387831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.387862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:08.559 [2024-11-20 05:55:28.393401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.393428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:08.559 [2024-11-20 05:55:28.393454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.562 ms 00:44:08.559 [2024-11-20 05:55:28.393465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.393502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.393511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:08.559 [2024-11-20 05:55:28.393519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:08.559 [2024-11-20 05:55:28.393525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.393561] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:08.559 [2024-11-20 05:55:28.393583] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:08.559 [2024-11-20 05:55:28.393617] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:08.559 [2024-11-20 05:55:28.393636] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:08.559 [2024-11-20 05:55:28.393742] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:08.559 [2024-11-20 05:55:28.393757] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:08.559 [2024-11-20 05:55:28.393767] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:08.559 [2024-11-20 05:55:28.393778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:08.559 [2024-11-20 05:55:28.393788] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:08.559 [2024-11-20 05:55:28.393796] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:08.559 [2024-11-20 05:55:28.393813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:08.559 [2024-11-20 05:55:28.393821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:08.559 [2024-11-20 05:55:28.393832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:08.559 [2024-11-20 05:55:28.393841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.393848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:08.559 [2024-11-20 05:55:28.393856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:44:08.559 [2024-11-20 05:55:28.393863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.393932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.559 [2024-11-20 05:55:28.393940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:08.559 [2024-11-20 05:55:28.393948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:44:08.559 [2024-11-20 05:55:28.393955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.559 [2024-11-20 05:55:28.394052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:08.559 [2024-11-20 05:55:28.394069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:08.559 [2024-11-20 05:55:28.394078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:08.559 [2024-11-20 05:55:28.394087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.559 [2024-11-20 05:55:28.394095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:08.559 [2024-11-20 05:55:28.394101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:08.559 [2024-11-20 05:55:28.394110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:08.559 [2024-11-20 05:55:28.394117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:08.559 [2024-11-20 05:55:28.394124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:08.559 [2024-11-20 05:55:28.394130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:08.559 [2024-11-20 05:55:28.394137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:08.559 [2024-11-20 05:55:28.394144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:08.559 [2024-11-20 05:55:28.394150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:08.559 [2024-11-20 05:55:28.394157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:08.560 [2024-11-20 05:55:28.394164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:08.560 [2024-11-20 05:55:28.394181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:08.560 [2024-11-20 05:55:28.394195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394202] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:08.560 [2024-11-20 05:55:28.394216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:08.560 [2024-11-20 05:55:28.394235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:08.560 [2024-11-20 05:55:28.394255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:08.560 [2024-11-20 05:55:28.394274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:08.560 [2024-11-20 05:55:28.394293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:08.560 [2024-11-20 05:55:28.394306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:08.560 [2024-11-20 05:55:28.394314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:08.560 [2024-11-20 05:55:28.394321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:08.560 [2024-11-20 05:55:28.394327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:08.560 [2024-11-20 05:55:28.394334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:08.560 [2024-11-20 05:55:28.394340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:08.560 [2024-11-20 05:55:28.394353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:08.560 [2024-11-20 05:55:28.394360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394367] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:08.560 [2024-11-20 05:55:28.394375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:08.560 [2024-11-20 05:55:28.394382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:08.560 [2024-11-20 05:55:28.394397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:08.560 [2024-11-20 05:55:28.394404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:08.560 [2024-11-20 05:55:28.394410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:08.560 
[2024-11-20 05:55:28.394417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:08.560 [2024-11-20 05:55:28.394423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:08.560 [2024-11-20 05:55:28.394430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:08.560 [2024-11-20 05:55:28.394438] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:08.560 [2024-11-20 05:55:28.394446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:08.560 [2024-11-20 05:55:28.394462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:08.560 [2024-11-20 05:55:28.394468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:08.560 [2024-11-20 05:55:28.394476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:08.560 [2024-11-20 05:55:28.394483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:08.560 [2024-11-20 05:55:28.394490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:08.560 [2024-11-20 05:55:28.394496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:08.560 [2024-11-20 05:55:28.394504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:08.560 [2024-11-20 05:55:28.394511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:08.560 [2024-11-20 05:55:28.394519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:08.560 [2024-11-20 05:55:28.394554] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:08.560 [2024-11-20 05:55:28.394565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:08.560 [2024-11-20 05:55:28.394581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:08.560 [2024-11-20 05:55:28.394588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:08.560 [2024-11-20 05:55:28.394595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:08.560 [2024-11-20 05:55:28.394603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.560 [2024-11-20 05:55:28.394610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:08.560 [2024-11-20 05:55:28.394618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:44:08.560 [2024-11-20 05:55:28.394626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.560 [2024-11-20 05:55:28.442149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.560 [2024-11-20 05:55:28.442194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:08.560 [2024-11-20 05:55:28.442206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.561 ms 00:44:08.560 [2024-11-20 05:55:28.442230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.560 [2024-11-20 05:55:28.442328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.560 [2024-11-20 05:55:28.442338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:08.560 [2024-11-20 05:55:28.442346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:44:08.560 [2024-11-20 05:55:28.442353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.819 [2024-11-20 05:55:28.506802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.819 [2024-11-20 05:55:28.506848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:08.820 [2024-11-20 05:55:28.506861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.482 ms 00:44:08.820 [2024-11-20 05:55:28.506869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.506915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.506924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:08.820 [2024-11-20 05:55:28.506937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:44:08.820 [2024-11-20 05:55:28.506945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.507786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.507824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:08.820 [2024-11-20 05:55:28.507834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:44:08.820 [2024-11-20 05:55:28.507841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.507970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.507990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:08.820 [2024-11-20 05:55:28.507998] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:44:08.820 [2024-11-20 05:55:28.508012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.530679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.530719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:08.820 [2024-11-20 05:55:28.530735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.686 ms 00:44:08.820 [2024-11-20 05:55:28.530743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.549619] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:44:08.820 [2024-11-20 05:55:28.549738] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:08.820 [2024-11-20 05:55:28.549754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.549779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:08.820 [2024-11-20 05:55:28.549789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.932 ms 00:44:08.820 [2024-11-20 05:55:28.549797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.577664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.577701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:08.820 [2024-11-20 05:55:28.577713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.869 ms 00:44:08.820 [2024-11-20 05:55:28.577737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.595159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.595193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:08.820 [2024-11-20 05:55:28.595203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.414 ms 00:44:08.820 [2024-11-20 05:55:28.595226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.612336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.612367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:08.820 [2024-11-20 05:55:28.612378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.109 ms 00:44:08.820 [2024-11-20 05:55:28.612384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.613144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.613171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:08.820 [2024-11-20 05:55:28.613180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:44:08.820 [2024-11-20 05:55:28.613192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.708333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.708417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:08.820 [2024-11-20 05:55:28.708440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.300 ms 00:44:08.820 [2024-11-20 05:55:28.708449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.719437] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:08.820 [2024-11-20 05:55:28.724471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.724508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:08.820 [2024-11-20 05:55:28.724521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.974 ms 00:44:08.820 [2024-11-20 05:55:28.724529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.724665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.724676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:08.820 [2024-11-20 05:55:28.724684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:08.820 [2024-11-20 05:55:28.724696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.724801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.724811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:08.820 [2024-11-20 05:55:28.724836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:44:08.820 [2024-11-20 05:55:28.724844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.724870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.724879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:08.820 [2024-11-20 05:55:28.724887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:08.820 [2024-11-20 05:55:28.724894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:08.820 [2024-11-20 05:55:28.724936] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:08.820 [2024-11-20 05:55:28.724947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:08.820 [2024-11-20 05:55:28.724955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:08.820 [2024-11-20 05:55:28.724962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:08.820 [2024-11-20 05:55:28.724970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.079 [2024-11-20 05:55:28.760942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.079 [2024-11-20 05:55:28.761006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:09.079 [2024-11-20 05:55:28.761020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.022 ms 00:44:09.079 [2024-11-20 05:55:28.761035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.079 [2024-11-20 05:55:28.761116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.079 [2024-11-20 05:55:28.761126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:09.079 [2024-11-20 05:55:28.761135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:44:09.079 [2024-11-20 05:55:28.761143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
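The "SB metadata layout - nvc" dump a few entries back is easy to sanity-check: each region's blk_offs should equal the previous region's blk_offs plus blk_sz, and the grand total should match the reported NV cache capacity. A short sketch of that check follows; the 4 KiB block size is an assumption inferred from the capacity figures, not something the log states directly.

# (blk_offs, blk_sz) pairs copied from the nvc metadata layout dump above.
nvc_regions = [
    (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
    (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
    (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
    (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0),
]

# Every region must start exactly where the previous one ended.
for (off, size), (nxt, _) in zip(nvc_regions, nvc_regions[1:]):
    assert off + size == nxt, f"gap or overlap after block offset 0x{off:x}"

# 1323776 blocks * 4096 B = 5171.00 MiB, matching the
# "NV cache device capacity: 5171.00 MiB" line logged by ftl_layout_setup.
total_blocks = nvc_regions[-1][0] + nvc_regions[-1][1]
print(total_blocks, total_blocks * 4096 / (1024 * 1024))

# The L2P figures agree the same way:
# 20971520 entries * 4 B per address = 80.00 MiB, the size of the l2p region.
print(20971520 * 4 / (1024 * 1024))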
00:44:09.079 [2024-11-20 05:55:28.762757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 411.208 ms, result 0 00:44:10.015  [2024-11-20T05:55:30.870Z] Copying: 28/1024 [MB] (28 MBps) [2024-11-20T05:55:31.809Z] Copying: 57/1024 [MB] (28 MBps) [2024-11-20T05:55:33.189Z] Copying: 86/1024 [MB] (28 MBps) [2024-11-20T05:55:34.129Z] Copying: 116/1024 [MB] (29 MBps) [2024-11-20T05:55:35.068Z] Copying: 144/1024 [MB] (28 MBps) [2024-11-20T05:55:36.007Z] Copying: 174/1024 [MB] (29 MBps) [2024-11-20T05:55:36.946Z] Copying: 204/1024 [MB] (30 MBps) [2024-11-20T05:55:37.884Z] Copying: 234/1024 [MB] (29 MBps) [2024-11-20T05:55:38.822Z] Copying: 263/1024 [MB] (29 MBps) [2024-11-20T05:55:39.780Z] Copying: 293/1024 [MB] (29 MBps) [2024-11-20T05:55:41.157Z] Copying: 321/1024 [MB] (28 MBps) [2024-11-20T05:55:42.093Z] Copying: 350/1024 [MB] (28 MBps) [2024-11-20T05:55:43.031Z] Copying: 380/1024 [MB] (29 MBps) [2024-11-20T05:55:43.969Z] Copying: 409/1024 [MB] (29 MBps) [2024-11-20T05:55:44.905Z] Copying: 438/1024 [MB] (29 MBps) [2024-11-20T05:55:45.843Z] Copying: 468/1024 [MB] (29 MBps) [2024-11-20T05:55:46.794Z] Copying: 497/1024 [MB] (29 MBps) [2024-11-20T05:55:48.171Z] Copying: 527/1024 [MB] (29 MBps) [2024-11-20T05:55:48.738Z] Copying: 556/1024 [MB] (29 MBps) [2024-11-20T05:55:50.119Z] Copying: 585/1024 [MB] (28 MBps) [2024-11-20T05:55:51.058Z] Copying: 614/1024 [MB] (28 MBps) [2024-11-20T05:55:51.996Z] Copying: 643/1024 [MB] (29 MBps) [2024-11-20T05:55:52.935Z] Copying: 672/1024 [MB] (28 MBps) [2024-11-20T05:55:53.875Z] Copying: 699/1024 [MB] (27 MBps) [2024-11-20T05:55:54.815Z] Copying: 726/1024 [MB] (27 MBps) [2024-11-20T05:55:55.755Z] Copying: 755/1024 [MB] (28 MBps) [2024-11-20T05:55:57.137Z] Copying: 783/1024 [MB] (28 MBps) [2024-11-20T05:55:58.077Z] Copying: 812/1024 [MB] (28 MBps) [2024-11-20T05:55:59.016Z] Copying: 841/1024 [MB] (29 MBps) [2024-11-20T05:55:59.955Z] Copying: 870/1024 [MB] (29 MBps) [2024-11-20T05:56:00.894Z] Copying: 900/1024 [MB] (29 MBps) [2024-11-20T05:56:01.835Z] Copying: 929/1024 [MB] (29 MBps) [2024-11-20T05:56:02.774Z] Copying: 958/1024 [MB] (28 MBps) [2024-11-20T05:56:03.713Z] Copying: 986/1024 [MB] (28 MBps) [2024-11-20T05:56:05.092Z] Copying: 1015/1024 [MB] (28 MBps) [2024-11-20T05:56:05.092Z] Copying: 1048456/1048576 [kB] (8456 kBps) [2024-11-20T05:56:05.092Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-20 05:56:04.828157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.173 [2024-11-20 05:56:04.828270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:45.173 [2024-11-20 05:56:04.828286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:45.173 [2024-11-20 05:56:04.828325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.173 [2024-11-20 05:56:04.829908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:45.173 [2024-11-20 05:56:04.836568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.173 [2024-11-20 05:56:04.836607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:45.174 [2024-11-20 05:56:04.836619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:44:45.174 [2024-11-20 05:56:04.836628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:04.848338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:04.848401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:45.174 [2024-11-20 05:56:04.848414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.194 ms 00:44:45.174 [2024-11-20 05:56:04.848429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:04.872201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:04.872245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:45.174 [2024-11-20 05:56:04.872260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.801 ms 00:44:45.174 [2024-11-20 05:56:04.872270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:04.877356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:04.877384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:45.174 [2024-11-20 05:56:04.877394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.062 ms 00:44:45.174 [2024-11-20 05:56:04.877402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:04.915474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:04.915573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:45.174 [2024-11-20 05:56:04.915590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.094 ms 00:44:45.174 [2024-11-20 05:56:04.915598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:04.936533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:04.936573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:45.174 [2024-11-20 05:56:04.936585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.939 ms 00:44:45.174 [2024-11-20 05:56:04.936593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:05.044745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:05.044819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:45.174 [2024-11-20 05:56:05.044834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.323 ms 00:44:45.174 [2024-11-20 05:56:05.044843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.174 [2024-11-20 05:56:05.081370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.174 [2024-11-20 05:56:05.081404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:45.174 [2024-11-20 05:56:05.081415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.579 ms 00:44:45.174 [2024-11-20 05:56:05.081439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.434 [2024-11-20 05:56:05.115412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.434 [2024-11-20 05:56:05.115460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:45.434 [2024-11-20 05:56:05.115470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.005 ms 00:44:45.434 [2024-11-20 05:56:05.115477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.434 
[2024-11-20 05:56:05.149087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.434 [2024-11-20 05:56:05.149120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:45.434 [2024-11-20 05:56:05.149131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.642 ms 00:44:45.434 [2024-11-20 05:56:05.149138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.434 [2024-11-20 05:56:05.182617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.434 [2024-11-20 05:56:05.182649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:45.434 [2024-11-20 05:56:05.182659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.477 ms 00:44:45.435 [2024-11-20 05:56:05.182666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.435 [2024-11-20 05:56:05.182698] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:45.435 [2024-11-20 05:56:05.182727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115712 / 261120 wr_cnt: 1 state: open 00:44:45.435 [2024-11-20 05:56:05.182741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 
wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.182995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183247] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:45.435 [2024-11-20 05:56:05.183420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183427] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:45.436 [2024-11-20 05:56:05.183492] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:45.436 [2024-11-20 05:56:05.183500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6682375f-6889-4b06-a1ab-eca8cf79edd1 00:44:45.436 [2024-11-20 05:56:05.183507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115712 00:44:45.436 [2024-11-20 05:56:05.183521] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116672 00:44:45.436 [2024-11-20 05:56:05.183528] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115712 00:44:45.436 [2024-11-20 05:56:05.183536] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:44:45.436 [2024-11-20 05:56:05.183543] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:45.436 [2024-11-20 05:56:05.183557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:45.436 [2024-11-20 05:56:05.183574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:45.436 [2024-11-20 05:56:05.183581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:45.436 [2024-11-20 05:56:05.183587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:45.436 [2024-11-20 05:56:05.183594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.436 [2024-11-20 05:56:05.183602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:45.436 [2024-11-20 05:56:05.183610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:44:45.436 [2024-11-20 05:56:05.183617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.436 [2024-11-20 05:56:05.203374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.436 [2024-11-20 05:56:05.203406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:45.436 [2024-11-20 05:56:05.203415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.766 ms 00:44:45.436 [2024-11-20 05:56:05.203444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.436 [2024-11-20 05:56:05.204111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:45.436 [2024-11-20 05:56:05.204148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:45.436 [2024-11-20 05:56:05.204176] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:44:45.436 [2024-11-20 05:56:05.204196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.436 [2024-11-20 05:56:05.257458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.436 [2024-11-20 05:56:05.257572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:45.436 [2024-11-20 05:56:05.257607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.436 [2024-11-20 05:56:05.257628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.436 [2024-11-20 05:56:05.257706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.436 [2024-11-20 05:56:05.257729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:45.436 [2024-11-20 05:56:05.257814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.436 [2024-11-20 05:56:05.257835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.436 [2024-11-20 05:56:05.257965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.436 [2024-11-20 05:56:05.258006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:45.436 [2024-11-20 05:56:05.258041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.436 [2024-11-20 05:56:05.258051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.436 [2024-11-20 05:56:05.258070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.436 [2024-11-20 05:56:05.258078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:45.436 [2024-11-20 05:56:05.258086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.436 [2024-11-20 05:56:05.258093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.391306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.391372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:45.696 [2024-11-20 05:56:05.391392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.391401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.493707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.493774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:45.696 [2024-11-20 05:56:05.493787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.493795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.493932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.493963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:45.696 [2024-11-20 05:56:05.493972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.493985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.494044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.494053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:44:45.696 [2024-11-20 05:56:05.494062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.494069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.494203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.494221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:45.696 [2024-11-20 05:56:05.494230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.494237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.494298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.494310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:45.696 [2024-11-20 05:56:05.494318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.494326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.494369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.494377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:45.696 [2024-11-20 05:56:05.494384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.494392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.494444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:45.696 [2024-11-20 05:56:05.494453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:45.696 [2024-11-20 05:56:05.494461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:45.696 [2024-11-20 05:56:05.494469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:45.696 [2024-11-20 05:56:05.494607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 669.564 ms, result 0 00:44:47.613 00:44:47.613 00:44:47.613 05:56:07 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:44:47.613 [2024-11-20 05:56:07.369055] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
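A back-of-the-envelope check on the spdk_dd arguments above, assuming --skip and --count are counted in logical blocks and assuming the 4 KiB FTL block size implied by the 1024 MB total in the Copying progress further down (the block size is not stated in this log):

    /* Sketch only: converts the --skip/--count block arguments of the
     * spdk_dd command above into byte offsets. The 4 KiB block size is
     * an assumption inferred from the 1024/1024 [MB] progress total. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t block_size = 4096;   /* assumed FTL logical block */
        const uint64_t skip  = 131072;      /* --skip=131072  */
        const uint64_t count = 262144;      /* --count=262144 */

        /* 131072 * 4096 B = 512 MiB read offset into ftl0 */
        printf("offset: %" PRIu64 " MiB\n", skip * block_size >> 20);
        /* 262144 * 4096 B = 1024 MiB copied, matching the progress below */
        printf("length: %" PRIu64 " MiB\n", count * block_size >> 20);
        return 0;
    }

Under those assumptions the command reads 1 GiB starting at a 512 MiB offset, which lines up with the copy finishing at 1024/1024 [MB] below.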
00:44:47.613 [2024-11-20 05:56:07.369181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78880 ] 00:44:47.873 [2024-11-20 05:56:07.544095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.873 [2024-11-20 05:56:07.683339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.440 [2024-11-20 05:56:08.093275] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:48.440 [2024-11-20 05:56:08.093348] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:48.440 [2024-11-20 05:56:08.253016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.253069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:48.440 [2024-11-20 05:56:08.253088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:48.440 [2024-11-20 05:56:08.253095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.253140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.253166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:48.440 [2024-11-20 05:56:08.253177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:44:48.440 [2024-11-20 05:56:08.253185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.253203] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:48.440 [2024-11-20 05:56:08.254152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:48.440 [2024-11-20 05:56:08.254180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.254188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:48.440 [2024-11-20 05:56:08.254197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:44:48.440 [2024-11-20 05:56:08.254204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.256636] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:48.440 [2024-11-20 05:56:08.276076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.276121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:48.440 [2024-11-20 05:56:08.276134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.477 ms 00:44:48.440 [2024-11-20 05:56:08.276143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.276210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.276219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:48.440 [2024-11-20 05:56:08.276228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:44:48.440 [2024-11-20 05:56:08.276234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.288783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:48.440 [2024-11-20 05:56:08.288871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:48.440 [2024-11-20 05:56:08.288886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.515 ms 00:44:48.440 [2024-11-20 05:56:08.288900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.288986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.288999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:48.440 [2024-11-20 05:56:08.289007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:44:48.440 [2024-11-20 05:56:08.289015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.289065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.289076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:48.440 [2024-11-20 05:56:08.289084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:44:48.440 [2024-11-20 05:56:08.289091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.289119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:48.440 [2024-11-20 05:56:08.294667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.294696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:48.440 [2024-11-20 05:56:08.294706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.569 ms 00:44:48.440 [2024-11-20 05:56:08.294718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.294749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.440 [2024-11-20 05:56:08.294757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:48.440 [2024-11-20 05:56:08.294765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:48.440 [2024-11-20 05:56:08.294773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.440 [2024-11-20 05:56:08.294823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:48.440 [2024-11-20 05:56:08.294847] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:48.440 [2024-11-20 05:56:08.294883] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:48.440 [2024-11-20 05:56:08.294919] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:48.440 [2024-11-20 05:56:08.295010] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:48.440 [2024-11-20 05:56:08.295020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:48.440 [2024-11-20 05:56:08.295030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:48.440 [2024-11-20 05:56:08.295040] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:48.440 [2024-11-20 05:56:08.295050] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:48.440 [2024-11-20 05:56:08.295059] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:48.440 [2024-11-20 05:56:08.295066] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:48.441 [2024-11-20 05:56:08.295074] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:48.441 [2024-11-20 05:56:08.295084] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:48.441 [2024-11-20 05:56:08.295093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.441 [2024-11-20 05:56:08.295114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:48.441 [2024-11-20 05:56:08.295121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:44:48.441 [2024-11-20 05:56:08.295128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.441 [2024-11-20 05:56:08.295195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.441 [2024-11-20 05:56:08.295204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:48.441 [2024-11-20 05:56:08.295212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:44:48.441 [2024-11-20 05:56:08.295219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.441 [2024-11-20 05:56:08.295314] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:48.441 [2024-11-20 05:56:08.295334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:48.441 [2024-11-20 05:56:08.295343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:48.441 [2024-11-20 05:56:08.295366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:48.441 [2024-11-20 05:56:08.295389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:48.441 [2024-11-20 05:56:08.295404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:48.441 [2024-11-20 05:56:08.295410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:48.441 [2024-11-20 05:56:08.295417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:48.441 [2024-11-20 05:56:08.295423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:48.441 [2024-11-20 05:56:08.295431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:48.441 [2024-11-20 05:56:08.295447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:48.441 [2024-11-20 05:56:08.295460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295467] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:48.441 [2024-11-20 05:56:08.295482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:48.441 [2024-11-20 05:56:08.295502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:48.441 [2024-11-20 05:56:08.295522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:48.441 [2024-11-20 05:56:08.295541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:48.441 [2024-11-20 05:56:08.295560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:48.441 [2024-11-20 05:56:08.295572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:48.441 [2024-11-20 05:56:08.295579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:48.441 [2024-11-20 05:56:08.295586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:48.441 [2024-11-20 05:56:08.295594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:48.441 [2024-11-20 05:56:08.295601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:48.441 [2024-11-20 05:56:08.295607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:48.441 [2024-11-20 05:56:08.295620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:48.441 [2024-11-20 05:56:08.295627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295633] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:48.441 [2024-11-20 05:56:08.295640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:48.441 [2024-11-20 05:56:08.295648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:48.441 [2024-11-20 05:56:08.295664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:48.441 [2024-11-20 05:56:08.295670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:48.441 [2024-11-20 05:56:08.295676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:48.441 
[2024-11-20 05:56:08.295683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:48.441 [2024-11-20 05:56:08.295688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:48.441 [2024-11-20 05:56:08.295695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:48.441 [2024-11-20 05:56:08.295702] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:48.441 [2024-11-20 05:56:08.295711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.295719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:48.441 [2024-11-20 05:56:08.295728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:48.441 [2024-11-20 05:56:08.295735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:48.441 [2024-11-20 05:56:08.295743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:48.441 [2024-11-20 05:56:08.295750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:48.441 [2024-11-20 05:56:08.295757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:48.441 [2024-11-20 05:56:08.295764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:48.441 [2024-11-20 05:56:08.295771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:48.441 [2024-11-20 05:56:08.295777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:48.441 [2024-11-20 05:56:08.295785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.295792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.295799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.295806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.295813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:48.441 [2024-11-20 05:56:08.295820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:48.441 [2024-11-20 05:56:08.295912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.295947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:48.441 [2024-11-20 05:56:08.296004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:48.441 [2024-11-20 05:56:08.296096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:48.441 [2024-11-20 05:56:08.296141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:48.441 [2024-11-20 05:56:08.296186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.441 [2024-11-20 05:56:08.296207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:48.441 [2024-11-20 05:56:08.296244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:44:48.441 [2024-11-20 05:56:08.296264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.441 [2024-11-20 05:56:08.343372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.441 [2024-11-20 05:56:08.343460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:48.441 [2024-11-20 05:56:08.343489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.119 ms 00:44:48.441 [2024-11-20 05:56:08.343510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.441 [2024-11-20 05:56:08.343605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.441 [2024-11-20 05:56:08.343660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:48.441 [2024-11-20 05:56:08.343700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:44:48.441 [2024-11-20 05:56:08.343719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.700 [2024-11-20 05:56:08.408303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.700 [2024-11-20 05:56:08.408396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:48.700 [2024-11-20 05:56:08.408412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.635 ms 00:44:48.700 [2024-11-20 05:56:08.408422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.700 [2024-11-20 05:56:08.408463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.700 [2024-11-20 05:56:08.408473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:48.700 [2024-11-20 05:56:08.408486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:48.700 [2024-11-20 05:56:08.408494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.700 [2024-11-20 05:56:08.409349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.700 [2024-11-20 05:56:08.409363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:48.700 [2024-11-20 05:56:08.409372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:44:48.700 [2024-11-20 05:56:08.409381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.700 [2024-11-20 05:56:08.409502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.700 [2024-11-20 05:56:08.409531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:48.700 [2024-11-20 05:56:08.409539] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:44:48.700 [2024-11-20 05:56:08.409553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.700 [2024-11-20 05:56:08.431692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.700 [2024-11-20 05:56:08.431768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:48.700 [2024-11-20 05:56:08.431786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.159 ms 00:44:48.700 [2024-11-20 05:56:08.431795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.700 [2024-11-20 05:56:08.451416] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:44:48.700 [2024-11-20 05:56:08.451449] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:48.701 [2024-11-20 05:56:08.451462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.701 [2024-11-20 05:56:08.451470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:48.701 [2024-11-20 05:56:08.451478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.579 ms 00:44:48.701 [2024-11-20 05:56:08.451486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.701 [2024-11-20 05:56:08.479242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.701 [2024-11-20 05:56:08.479275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:48.701 [2024-11-20 05:56:08.479287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.768 ms 00:44:48.701 [2024-11-20 05:56:08.479295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.701 [2024-11-20 05:56:08.496547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.701 [2024-11-20 05:56:08.496589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:48.701 [2024-11-20 05:56:08.496599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.232 ms 00:44:48.701 [2024-11-20 05:56:08.496606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.701 [2024-11-20 05:56:08.513729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.701 [2024-11-20 05:56:08.513761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:48.701 [2024-11-20 05:56:08.513772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.122 ms 00:44:48.701 [2024-11-20 05:56:08.513780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.701 [2024-11-20 05:56:08.514572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.701 [2024-11-20 05:56:08.514604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:48.701 [2024-11-20 05:56:08.514614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:44:48.701 [2024-11-20 05:56:08.514625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.701 [2024-11-20 05:56:08.606080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.701 [2024-11-20 05:56:08.606148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:48.701 [2024-11-20 05:56:08.606172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.607 ms 00:44:48.701 [2024-11-20 05:56:08.606180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.701 [2024-11-20 05:56:08.616568] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:48.960 [2024-11-20 05:56:08.621167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.621196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:48.960 [2024-11-20 05:56:08.621208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.954 ms 00:44:48.960 [2024-11-20 05:56:08.621217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.960 [2024-11-20 05:56:08.621309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.621322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:48.960 [2024-11-20 05:56:08.621331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:48.960 [2024-11-20 05:56:08.621344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.960 [2024-11-20 05:56:08.623539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.623575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:48.960 [2024-11-20 05:56:08.623585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:44:48.960 [2024-11-20 05:56:08.623592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.960 [2024-11-20 05:56:08.623627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.623637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:48.960 [2024-11-20 05:56:08.623647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:48.960 [2024-11-20 05:56:08.623653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.960 [2024-11-20 05:56:08.623693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:48.960 [2024-11-20 05:56:08.623703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.623711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:48.960 [2024-11-20 05:56:08.623718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:48.960 [2024-11-20 05:56:08.623726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.960 [2024-11-20 05:56:08.658942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.659027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:48.960 [2024-11-20 05:56:08.659055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.266 ms 00:44:48.960 [2024-11-20 05:56:08.659081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:48.960 [2024-11-20 05:56:08.659169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:48.960 [2024-11-20 05:56:08.659229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:48.960 [2024-11-20 05:56:08.659271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:44:48.960 [2024-11-20 05:56:08.659297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
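Two things worth noting in the startup sequence above. First, the Action/name/duration/status quartets emitted by trace_step in mngt/ftl_mngt.c bracket each step of the management pipeline, and the per-step durations roughly sum to the 408.069 ms reported for the whole 'FTL startup' process below. Second, the 'Set FTL dirty state' step near the end of startup is the usual crash-consistency pattern: the device is marked dirty before user I/O begins, so a crash before the orderly 'Set FTL clean state' step (visible in the shutdown sequence further down) would force recovery on the next load.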
00:44:48.960 [2024-11-20 05:56:08.660844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.069 ms, result 0
00:44:50.335  [2024-11-20T05:56:11.191Z] Copying: 26/1024 [MB] (26 MBps)
[2024-11-20T05:56:12.130Z] Copying: 58/1024 [MB] (32 MBps)
[2024-11-20T05:56:13.069Z] Copying: 90/1024 [MB] (32 MBps)
[2024-11-20T05:56:14.009Z] Copying: 122/1024 [MB] (31 MBps)
[2024-11-20T05:56:14.949Z] Copying: 155/1024 [MB] (32 MBps)
[2024-11-20T05:56:15.888Z] Copying: 187/1024 [MB] (32 MBps)
[2024-11-20T05:56:16.830Z] Copying: 218/1024 [MB] (30 MBps)
[2024-11-20T05:56:18.211Z] Copying: 248/1024 [MB] (30 MBps)
[2024-11-20T05:56:19.151Z] Copying: 279/1024 [MB] (30 MBps)
[2024-11-20T05:56:20.089Z] Copying: 311/1024 [MB] (31 MBps)
[2024-11-20T05:56:21.027Z] Copying: 341/1024 [MB] (30 MBps)
[2024-11-20T05:56:21.966Z] Copying: 374/1024 [MB] (32 MBps)
[2024-11-20T05:56:22.905Z] Copying: 405/1024 [MB] (31 MBps)
[2024-11-20T05:56:23.843Z] Copying: 436/1024 [MB] (31 MBps)
[2024-11-20T05:56:25.222Z] Copying: 467/1024 [MB] (30 MBps)
[2024-11-20T05:56:26.159Z] Copying: 498/1024 [MB] (30 MBps)
[2024-11-20T05:56:27.098Z] Copying: 529/1024 [MB] (31 MBps)
[2024-11-20T05:56:28.035Z] Copying: 561/1024 [MB] (32 MBps)
[2024-11-20T05:56:28.972Z] Copying: 593/1024 [MB] (32 MBps)
[2024-11-20T05:56:29.909Z] Copying: 624/1024 [MB] (30 MBps)
[2024-11-20T05:56:30.846Z] Copying: 655/1024 [MB] (31 MBps)
[2024-11-20T05:56:32.226Z] Copying: 686/1024 [MB] (30 MBps)
[2024-11-20T05:56:32.794Z] Copying: 717/1024 [MB] (31 MBps)
[2024-11-20T05:56:34.174Z] Copying: 748/1024 [MB] (31 MBps)
[2024-11-20T05:56:35.135Z] Copying: 779/1024 [MB] (30 MBps)
[2024-11-20T05:56:36.074Z] Copying: 810/1024 [MB] (30 MBps)
[2024-11-20T05:56:37.014Z] Copying: 842/1024 [MB] (31 MBps)
[2024-11-20T05:56:37.954Z] Copying: 875/1024 [MB] (32 MBps)
[2024-11-20T05:56:38.894Z] Copying: 906/1024 [MB] (31 MBps)
[2024-11-20T05:56:39.832Z] Copying: 938/1024 [MB] (31 MBps)
[2024-11-20T05:56:40.772Z] Copying: 969/1024 [MB] (31 MBps)
[2024-11-20T05:56:41.713Z] Copying: 1001/1024 [MB] (32 MBps)
[2024-11-20T05:56:41.713Z] Copying: 1024/1024 [MB] (average 31 MBps)
[2024-11-20 05:56:41.512613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.512774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:21.794 [2024-11-20 05:56:41.512797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:45:21.794 [2024-11-20 05:56:41.512829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.512862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:21.794 [2024-11-20 05:56:41.518295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.518334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:21.794 [2024-11-20 05:56:41.518346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.423 ms 00:45:21.794 [2024-11-20 05:56:41.518355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.518573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.518601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:21.794 [2024-11-20 05:56:41.518724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
0.190 ms 00:45:21.794 [2024-11-20 05:56:41.518732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.522762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.522997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:21.794 [2024-11-20 05:56:41.523009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.007 ms 00:45:21.794 [2024-11-20 05:56:41.523019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.528434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.528537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:21.794 [2024-11-20 05:56:41.528552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.387 ms 00:45:21.794 [2024-11-20 05:56:41.528559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.563809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.563918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:21.794 [2024-11-20 05:56:41.563933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.252 ms 00:45:21.794 [2024-11-20 05:56:41.563944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.584062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.584107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:21.794 [2024-11-20 05:56:41.584118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.120 ms 00:45:21.794 [2024-11-20 05:56:41.584127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:21.794 [2024-11-20 05:56:41.706630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:21.794 [2024-11-20 05:56:41.706691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:21.794 [2024-11-20 05:56:41.706705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.701 ms 00:45:21.794 [2024-11-20 05:56:41.706713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.055 [2024-11-20 05:56:41.741388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.055 [2024-11-20 05:56:41.741422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:22.055 [2024-11-20 05:56:41.741432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.724 ms 00:45:22.055 [2024-11-20 05:56:41.741440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.055 [2024-11-20 05:56:41.774398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.055 [2024-11-20 05:56:41.774432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:22.055 [2024-11-20 05:56:41.774456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.989 ms 00:45:22.055 [2024-11-20 05:56:41.774464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.055 [2024-11-20 05:56:41.806658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.055 [2024-11-20 05:56:41.806691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:22.055 [2024-11-20 05:56:41.806701] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.221 ms 00:45:22.055 [2024-11-20 05:56:41.806708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.055 [2024-11-20 05:56:41.839023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.055 [2024-11-20 05:56:41.839054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:22.055 [2024-11-20 05:56:41.839064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.309 ms 00:45:22.055 [2024-11-20 05:56:41.839072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.055 [2024-11-20 05:56:41.839102] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:22.055 [2024-11-20 05:56:41.839118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:45:22.055 [2024-11-20 05:56:41.839127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 
05:56:41.839612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:22.056 [2024-11-20 05:56:41.839757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:45:22.057 [2024-11-20 05:56:41.839785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:22.057 [2024-11-20 05:56:41.839860] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:22.057 [2024-11-20 05:56:41.839867] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6682375f-6889-4b06-a1ab-eca8cf79edd1 00:45:22.057 [2024-11-20 05:56:41.839875] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:45:22.057 [2024-11-20 05:56:41.839883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16320 00:45:22.057 [2024-11-20 05:56:41.839890] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15360 00:45:22.057 [2024-11-20 05:56:41.839898] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625 00:45:22.057 [2024-11-20 05:56:41.839905] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:22.057 [2024-11-20 05:56:41.839919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:22.057 [2024-11-20 05:56:41.839926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:22.057 [2024-11-20 05:56:41.839944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:22.057 [2024-11-20 05:56:41.839950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:22.057 [2024-11-20 05:56:41.839957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.057 [2024-11-20 05:56:41.839965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:22.057 [2024-11-20 05:56:41.839972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:45:22.057 [2024-11-20 05:56:41.839979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.057 [2024-11-20 05:56:41.859494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.057 [2024-11-20 05:56:41.859588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:22.057 [2024-11-20 05:56:41.859601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.524 ms 00:45:22.057 [2024-11-20 05:56:41.859616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.057 [2024-11-20 05:56:41.860278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:22.057 [2024-11-20 05:56:41.860298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:22.057 [2024-11-20 05:56:41.860306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:45:22.057 [2024-11-20 05:56:41.860314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:22.057 [2024-11-20 05:56:41.911756] mngt/ftl_mngt.c: 
00:45:22.057 [2024-11-20 05:56:41.911756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:45:22.057 [2024-11-20 05:56:41.911876] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:45:22.057 [2024-11-20 05:56:41.911969] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:45:22.057 [2024-11-20 05:56:41.912017] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.038103] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.136866] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137123] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137189] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137329] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137394] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137460] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137565] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:45:22.317 [2024-11-20 05:56:42.137727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 626.297 ms, result 0
00:45:23.698
00:45:23.698
00:45:23.698 05:56:43 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:45:25.607 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77440
00:45:25.607 05:56:45 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 77440 ']'
00:45:25.607 05:56:45 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 77440
00:45:25.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77440) - No such process
00:45:25.607 Process with pid 77440 is not found
00:45:25.607 05:56:45 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 77440 is not found'
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:45:25.607 Remove shared memory files
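The "No such process" path above is the normal outcome here: the FTL app had already exited during shutdown, so killprocess only probes the pid and reports. A simplified sketch of that pattern (hypothetical; the real killprocess in autotest_common.sh handles more cases):

    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then    # probe without sending a signal
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid" && wait "$pid"               # otherwise terminate and reap
    }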
05:56:45 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:45:25.607 05:56:45 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:45:25.607
00:45:25.607 real 2m55.220s
00:45:25.607 user 2m43.008s
00:45:25.607 sys 0m13.822s
00:45:25.607 05:56:45 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable
00:45:25.607 05:56:45 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:45:25.607 ************************************
00:45:25.607 END TEST ftl_restore
00:45:25.607 ************************************
00:45:25.607 05:56:45 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:45:25.607 05:56:45 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:45:25.607 05:56:45 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:45:25.607 05:56:45 ftl -- common/autotest_common.sh@10 -- # set +x
00:45:25.607 ************************************
00:45:25.607 START TEST ftl_dirty_shutdown
00:45:25.607 ************************************
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
* Looking for test storage...
00:45:25.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:25.607 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:45:25.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.608 --rc genhtml_branch_coverage=1 00:45:25.608 --rc genhtml_function_coverage=1 00:45:25.608 --rc genhtml_legend=1 00:45:25.608 --rc geninfo_all_blocks=1 00:45:25.608 --rc geninfo_unexecuted_blocks=1 00:45:25.608 00:45:25.608 ' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:45:25.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.608 --rc genhtml_branch_coverage=1 00:45:25.608 --rc genhtml_function_coverage=1 00:45:25.608 --rc genhtml_legend=1 00:45:25.608 --rc geninfo_all_blocks=1 00:45:25.608 --rc geninfo_unexecuted_blocks=1 00:45:25.608 00:45:25.608 ' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:45:25.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.608 --rc genhtml_branch_coverage=1 00:45:25.608 --rc genhtml_function_coverage=1 00:45:25.608 --rc genhtml_legend=1 00:45:25.608 --rc geninfo_all_blocks=1 00:45:25.608 --rc geninfo_unexecuted_blocks=1 00:45:25.608 00:45:25.608 ' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:45:25.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:25.608 --rc genhtml_branch_coverage=1 00:45:25.608 --rc genhtml_function_coverage=1 00:45:25.608 --rc genhtml_legend=1 00:45:25.608 --rc geninfo_all_blocks=1 00:45:25.608 --rc geninfo_unexecuted_blocks=1 00:45:25.608 00:45:25.608 ' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:45:25.608 05:56:45 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79329 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79329 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79329 ']' 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:25.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:45:25.608 05:56:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:45:25.868 [2024-11-20 05:56:45.609337] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
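waitforlisten above blocks until the freshly started spdk_tgt (pid 79329) answers on /var/tmp/spdk.sock. The pattern can be approximated like this (an illustrative sketch only, not the actual helper from autotest_common.sh):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # poll the RPC socket until the target responds; bail out if it died
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
        kill -0 "$svcpid" 2>/dev/null || exit 1
        sleep 0.1
    done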
00:45:25.868 [2024-11-20 05:56:45.609630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79329 ] 00:45:26.127 [2024-11-20 05:56:45.788698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.127 [2024-11-20 05:56:45.930090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:45:27.066 05:56:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:45:27.635 { 00:45:27.635 "name": "nvme0n1", 00:45:27.635 "aliases": [ 00:45:27.635 "248e3f2b-a6f7-449a-9f31-b5febb4e3cef" 00:45:27.635 ], 00:45:27.635 "product_name": "NVMe disk", 00:45:27.635 "block_size": 4096, 00:45:27.635 "num_blocks": 1310720, 00:45:27.635 "uuid": "248e3f2b-a6f7-449a-9f31-b5febb4e3cef", 00:45:27.635 "numa_id": -1, 00:45:27.635 "assigned_rate_limits": { 00:45:27.635 "rw_ios_per_sec": 0, 00:45:27.635 "rw_mbytes_per_sec": 0, 00:45:27.635 "r_mbytes_per_sec": 0, 00:45:27.635 "w_mbytes_per_sec": 0 00:45:27.635 }, 00:45:27.635 "claimed": true, 00:45:27.635 "claim_type": "read_many_write_one", 00:45:27.635 "zoned": false, 00:45:27.635 "supported_io_types": { 00:45:27.635 "read": true, 00:45:27.635 "write": true, 00:45:27.635 "unmap": true, 00:45:27.635 "flush": true, 00:45:27.635 "reset": true, 00:45:27.635 "nvme_admin": true, 00:45:27.635 "nvme_io": true, 00:45:27.635 "nvme_io_md": false, 00:45:27.635 "write_zeroes": true, 00:45:27.635 "zcopy": false, 00:45:27.635 "get_zone_info": false, 00:45:27.635 "zone_management": false, 00:45:27.635 "zone_append": false, 00:45:27.635 "compare": true, 00:45:27.635 "compare_and_write": false, 00:45:27.635 "abort": true, 00:45:27.635 "seek_hole": false, 00:45:27.635 "seek_data": false, 00:45:27.635 
"copy": true, 00:45:27.635 "nvme_iov_md": false 00:45:27.635 }, 00:45:27.635 "driver_specific": { 00:45:27.635 "nvme": [ 00:45:27.635 { 00:45:27.635 "pci_address": "0000:00:11.0", 00:45:27.635 "trid": { 00:45:27.635 "trtype": "PCIe", 00:45:27.635 "traddr": "0000:00:11.0" 00:45:27.635 }, 00:45:27.635 "ctrlr_data": { 00:45:27.635 "cntlid": 0, 00:45:27.635 "vendor_id": "0x1b36", 00:45:27.635 "model_number": "QEMU NVMe Ctrl", 00:45:27.635 "serial_number": "12341", 00:45:27.635 "firmware_revision": "8.0.0", 00:45:27.635 "subnqn": "nqn.2019-08.org.qemu:12341", 00:45:27.635 "oacs": { 00:45:27.635 "security": 0, 00:45:27.635 "format": 1, 00:45:27.635 "firmware": 0, 00:45:27.635 "ns_manage": 1 00:45:27.635 }, 00:45:27.635 "multi_ctrlr": false, 00:45:27.635 "ana_reporting": false 00:45:27.635 }, 00:45:27.635 "vs": { 00:45:27.635 "nvme_version": "1.4" 00:45:27.635 }, 00:45:27.635 "ns_data": { 00:45:27.635 "id": 1, 00:45:27.635 "can_share": false 00:45:27.635 } 00:45:27.635 } 00:45:27.635 ], 00:45:27.635 "mp_policy": "active_passive" 00:45:27.635 } 00:45:27.635 } 00:45:27.635 ]' 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:45:27.635 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=9d3d1836-e74f-4f4c-9057-6fd3465622a1 00:45:27.894 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:45:27.895 05:56:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d3d1836-e74f-4f4c-9057-6fd3465622a1 00:45:28.154 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:45:28.414 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=872a227a-9649-4c34-bf07-6c8d809a2c56 00:45:28.414 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 872a227a-9649-4c34-bf07-6c8d809a2c56 00:45:28.673 05:56:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=f009d64b-6518-430a-9249-c9b327e36be6 00:45:28.673 05:56:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:45:28.673 05:56:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f009d64b-6518-430a-9249-c9b327e36be6 00:45:28.673 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:45:28.673 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:45:28.673 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=f009d64b-6518-430a-9249-c9b327e36be6 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size f009d64b-6518-430a-9249-c9b327e36be6 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=f009d64b-6518-430a-9249-c9b327e36be6 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:45:28.674 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f009d64b-6518-430a-9249-c9b327e36be6 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:45:28.934 { 00:45:28.934 "name": "f009d64b-6518-430a-9249-c9b327e36be6", 00:45:28.934 "aliases": [ 00:45:28.934 "lvs/nvme0n1p0" 00:45:28.934 ], 00:45:28.934 "product_name": "Logical Volume", 00:45:28.934 "block_size": 4096, 00:45:28.934 "num_blocks": 26476544, 00:45:28.934 "uuid": "f009d64b-6518-430a-9249-c9b327e36be6", 00:45:28.934 "assigned_rate_limits": { 00:45:28.934 "rw_ios_per_sec": 0, 00:45:28.934 "rw_mbytes_per_sec": 0, 00:45:28.934 "r_mbytes_per_sec": 0, 00:45:28.934 "w_mbytes_per_sec": 0 00:45:28.934 }, 00:45:28.934 "claimed": false, 00:45:28.934 "zoned": false, 00:45:28.934 "supported_io_types": { 00:45:28.934 "read": true, 00:45:28.934 "write": true, 00:45:28.934 "unmap": true, 00:45:28.934 "flush": false, 00:45:28.934 "reset": true, 00:45:28.934 "nvme_admin": false, 00:45:28.934 "nvme_io": false, 00:45:28.934 "nvme_io_md": false, 00:45:28.934 "write_zeroes": true, 00:45:28.934 "zcopy": false, 00:45:28.934 "get_zone_info": false, 00:45:28.934 "zone_management": false, 00:45:28.934 "zone_append": false, 00:45:28.934 "compare": false, 00:45:28.934 "compare_and_write": false, 00:45:28.934 "abort": false, 00:45:28.934 "seek_hole": true, 00:45:28.934 "seek_data": true, 00:45:28.934 "copy": false, 00:45:28.934 "nvme_iov_md": false 00:45:28.934 }, 00:45:28.934 "driver_specific": { 00:45:28.934 "lvol": { 00:45:28.934 "lvol_store_uuid": "872a227a-9649-4c34-bf07-6c8d809a2c56", 00:45:28.934 "base_bdev": "nvme0n1", 00:45:28.934 "thin_provision": true, 00:45:28.934 "num_allocated_clusters": 0, 00:45:28.934 "snapshot": false, 00:45:28.934 "clone": false, 00:45:28.934 "esnap_clone": false 00:45:28.934 } 00:45:28.934 } 00:45:28.934 } 00:45:28.934 ]' 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:45:28.934 05:56:48 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size f009d64b-6518-430a-9249-c9b327e36be6 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=f009d64b-6518-430a-9249-c9b327e36be6 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:45:29.194 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f009d64b-6518-430a-9249-c9b327e36be6 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:45:29.454 { 00:45:29.454 "name": "f009d64b-6518-430a-9249-c9b327e36be6", 00:45:29.454 "aliases": [ 00:45:29.454 "lvs/nvme0n1p0" 00:45:29.454 ], 00:45:29.454 "product_name": "Logical Volume", 00:45:29.454 "block_size": 4096, 00:45:29.454 "num_blocks": 26476544, 00:45:29.454 "uuid": "f009d64b-6518-430a-9249-c9b327e36be6", 00:45:29.454 "assigned_rate_limits": { 00:45:29.454 "rw_ios_per_sec": 0, 00:45:29.454 "rw_mbytes_per_sec": 0, 00:45:29.454 "r_mbytes_per_sec": 0, 00:45:29.454 "w_mbytes_per_sec": 0 00:45:29.454 }, 00:45:29.454 "claimed": false, 00:45:29.454 "zoned": false, 00:45:29.454 "supported_io_types": { 00:45:29.454 "read": true, 00:45:29.454 "write": true, 00:45:29.454 "unmap": true, 00:45:29.454 "flush": false, 00:45:29.454 "reset": true, 00:45:29.454 "nvme_admin": false, 00:45:29.454 "nvme_io": false, 00:45:29.454 "nvme_io_md": false, 00:45:29.454 "write_zeroes": true, 00:45:29.454 "zcopy": false, 00:45:29.454 "get_zone_info": false, 00:45:29.454 "zone_management": false, 00:45:29.454 "zone_append": false, 00:45:29.454 "compare": false, 00:45:29.454 "compare_and_write": false, 00:45:29.454 "abort": false, 00:45:29.454 "seek_hole": true, 00:45:29.454 "seek_data": true, 00:45:29.454 "copy": false, 00:45:29.454 "nvme_iov_md": false 00:45:29.454 }, 00:45:29.454 "driver_specific": { 00:45:29.454 "lvol": { 00:45:29.454 "lvol_store_uuid": "872a227a-9649-4c34-bf07-6c8d809a2c56", 00:45:29.454 "base_bdev": "nvme0n1", 00:45:29.454 "thin_provision": true, 00:45:29.454 "num_allocated_clusters": 0, 00:45:29.454 "snapshot": false, 00:45:29.454 "clone": false, 00:45:29.454 "esnap_clone": false 00:45:29.454 } 00:45:29.454 } 00:45:29.454 } 00:45:29.454 ]' 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:45:29.454 05:56:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size f009d64b-6518-430a-9249-c9b327e36be6 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=f009d64b-6518-430a-9249-c9b327e36be6 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:45:29.714 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f009d64b-6518-430a-9249-c9b327e36be6 00:45:29.973 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:45:29.973 { 00:45:29.973 "name": "f009d64b-6518-430a-9249-c9b327e36be6", 00:45:29.973 "aliases": [ 00:45:29.974 "lvs/nvme0n1p0" 00:45:29.974 ], 00:45:29.974 "product_name": "Logical Volume", 00:45:29.974 "block_size": 4096, 00:45:29.974 "num_blocks": 26476544, 00:45:29.974 "uuid": "f009d64b-6518-430a-9249-c9b327e36be6", 00:45:29.974 "assigned_rate_limits": { 00:45:29.974 "rw_ios_per_sec": 0, 00:45:29.974 "rw_mbytes_per_sec": 0, 00:45:29.974 "r_mbytes_per_sec": 0, 00:45:29.974 "w_mbytes_per_sec": 0 00:45:29.974 }, 00:45:29.974 "claimed": false, 00:45:29.974 "zoned": false, 00:45:29.974 "supported_io_types": { 00:45:29.974 "read": true, 00:45:29.974 "write": true, 00:45:29.974 "unmap": true, 00:45:29.974 "flush": false, 00:45:29.974 "reset": true, 00:45:29.974 "nvme_admin": false, 00:45:29.974 "nvme_io": false, 00:45:29.974 "nvme_io_md": false, 00:45:29.974 "write_zeroes": true, 00:45:29.974 "zcopy": false, 00:45:29.974 "get_zone_info": false, 00:45:29.974 "zone_management": false, 00:45:29.974 "zone_append": false, 00:45:29.974 "compare": false, 00:45:29.974 "compare_and_write": false, 00:45:29.974 "abort": false, 00:45:29.974 "seek_hole": true, 00:45:29.974 "seek_data": true, 00:45:29.974 "copy": false, 00:45:29.974 "nvme_iov_md": false 00:45:29.974 }, 00:45:29.974 "driver_specific": { 00:45:29.974 "lvol": { 00:45:29.974 "lvol_store_uuid": "872a227a-9649-4c34-bf07-6c8d809a2c56", 00:45:29.974 "base_bdev": "nvme0n1", 00:45:29.974 "thin_provision": true, 00:45:29.974 "num_allocated_clusters": 0, 00:45:29.974 "snapshot": false, 00:45:29.974 "clone": false, 00:45:29.974 "esnap_clone": false 00:45:29.974 } 00:45:29.974 } 00:45:29.974 } 00:45:29.974 ]' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d f009d64b-6518-430a-9249-c9b327e36be6 
--l2p_dram_limit 10' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:45:29.974 05:56:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f009d64b-6518-430a-9249-c9b327e36be6 --l2p_dram_limit 10 -c nvc0n1p0 00:45:30.235 [2024-11-20 05:56:50.014690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:30.235 [2024-11-20 05:56:50.014752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:30.235 [2024-11-20 05:56:50.014770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:30.235 [2024-11-20 05:56:50.014780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:30.235 [2024-11-20 05:56:50.014886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:30.235 [2024-11-20 05:56:50.014898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:30.235 [2024-11-20 05:56:50.014909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:45:30.235 [2024-11-20 05:56:50.014934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:30.235 [2024-11-20 05:56:50.014959] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:30.235 [2024-11-20 05:56:50.016022] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:30.235 [2024-11-20 05:56:50.016058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:30.235 [2024-11-20 05:56:50.016068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:30.235 [2024-11-20 05:56:50.016081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:45:30.235 [2024-11-20 05:56:50.016089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:30.235 [2024-11-20 05:56:50.016172] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 32ba1394-50df-48b1-865d-b9cff1078769 00:45:30.235 [2024-11-20 05:56:50.018726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:30.235 [2024-11-20 05:56:50.018800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:45:30.235 [2024-11-20 05:56:50.018845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:45:30.235 [2024-11-20 05:56:50.018867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:30.235 [2024-11-20 05:56:50.032940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:30.235 [2024-11-20 05:56:50.033032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:30.235 [2024-11-20 05:56:50.033061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.968 ms 00:45:30.235 [2024-11-20 05:56:50.033084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:30.235 [2024-11-20 05:56:50.033202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:30.235 [2024-11-20 05:56:50.033244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:30.235 [2024-11-20 05:56:50.033269] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms
00:45:30.235 [2024-11-20 05:56:50.033307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:45:30.235 [2024-11-20 05:56:50.033388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.011 ms, status: 0
00:45:30.235 [2024-11-20 05:56:50.033588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:45:30.235 [2024-11-20 05:56:50.039924] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 6.356 ms, status: 0
00:45:30.235 [2024-11-20 05:56:50.040115] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.010 ms, status: 0
00:45:30.235 [2024-11-20 05:56:50.040243] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:45:30.235 [2024-11-20 05:56:50.040396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:45:30.235 [2024-11-20 05:56:50.040444] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:45:30.235 [2024-11-20 05:56:50.040484] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:45:30.235 [2024-11-20 05:56:50.040526] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:45:30.235 [2024-11-20 05:56:50.040564] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:45:30.235 [2024-11-20 05:56:50.040606] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:45:30.235 [2024-11-20 05:56:50.040642] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:45:30.235 [2024-11-20 05:56:50.040672] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:45:30.235 [2024-11-20 05:56:50.040692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:45:30.235 [2024-11-20 05:56:50.040740] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.500 ms, status: 0
00:45:30.235 [2024-11-20 05:56:50.040946] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.050 ms, status: 0
00:45:30.235 [2024-11-20 05:56:50.041129] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    Region sb               offset 0.00 MiB       blocks 0.12 MiB
    Region l2p              offset 0.12 MiB       blocks 80.00 MiB
    Region band_md          offset 80.12 MiB      blocks 0.50 MiB
    Region band_md_mirror   offset 80.62 MiB      blocks 0.50 MiB
    Region nvc_md           offset 113.88 MiB     blocks 0.12 MiB
    Region nvc_md_mirror    offset 114.00 MiB     blocks 0.12 MiB
    Region p2l0             offset 81.12 MiB      blocks 8.00 MiB
    Region p2l1             offset 89.12 MiB      blocks 8.00 MiB
    Region p2l2             offset 97.12 MiB      blocks 8.00 MiB
    Region p2l3             offset 105.12 MiB     blocks 8.00 MiB
    Region trim_md          offset 113.12 MiB     blocks 0.25 MiB
    Region trim_md_mirror   offset 113.38 MiB     blocks 0.25 MiB
    Region trim_log         offset 113.62 MiB     blocks 0.12 MiB
    Region trim_log_mirror  offset 113.75 MiB     blocks 0.12 MiB
00:45:30.236 [2024-11-20 05:56:50.042395] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region sb_mirror        offset 0.00 MiB       blocks 0.12 MiB
    Region vmap             offset 102400.25 MiB  blocks 3.38 MiB
    Region data_btm         offset 0.25 MiB       blocks 102400.00 MiB
00:45:30.236 [2024-11-20 05:56:50.042590] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:45:30.236 [2024-11-20 05:56:50.042730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:45:30.236 [2024-11-20 05:56:50.042783] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 1.710 ms, status: 0
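The layout numbers above are internally consistent: the l2p region has to hold one 4-byte address per L2P entry, which is exactly the 80 MiB shown for it. A quick check (illustrative arithmetic, not part of the suite):

    # L2P entries and address size from the ftl_layout_setup lines above
    awk 'BEGIN { printf "l2p region: %.2f MiB\n", 20971520 * 4 / (1024 * 1024) }'
    # -> l2p region: 80.00 MiB, matching the "Region l2p" row in the NV cache layout

Likewise, the 103424.00 MiB base device capacity is the same 103424 passed to bdev_lvol_create for nvme0n1p0 earlier in this test.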
00:45:30.236 [2024-11-20 05:56:50.042867] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:45:30.236 [2024-11-20 05:56:50.042883] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:45:34.433 [2024-11-20 05:56:53.552440] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Scrub NV cache, duration: 3516.341 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.591574] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 38.565 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.591957] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.062 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.637388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 45.319 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.637641] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.004 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.638297] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.428 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.638527] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.077 ms, status: 0
00:45:34.434 [2024-11-20 05:56:53.658900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:45:34.434 [2024-11-20 05:56:53.659017]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:34.434 [2024-11-20 05:56:53.659048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.274 ms 00:45:34.434 [2024-11-20 05:56:53.659072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.681000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:34.434 [2024-11-20 05:56:53.684278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.684339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:34.434 [2024-11-20 05:56:53.684371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.141 ms 00:45:34.434 [2024-11-20 05:56:53.684391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.774028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.774148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:45:34.434 [2024-11-20 05:56:53.774184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.754 ms 00:45:34.434 [2024-11-20 05:56:53.774205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.774407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.774449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:34.434 [2024-11-20 05:56:53.774483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:45:34.434 [2024-11-20 05:56:53.774511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.811809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.811900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:45:34.434 [2024-11-20 05:56:53.811932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.288 ms 00:45:34.434 [2024-11-20 05:56:53.811953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.848054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.848131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:45:34.434 [2024-11-20 05:56:53.848173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.112 ms 00:45:34.434 [2024-11-20 05:56:53.848193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.849025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.849083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:34.434 [2024-11-20 05:56:53.849119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:45:34.434 [2024-11-20 05:56:53.849151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.956042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.956193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:45:34.434 [2024-11-20 05:56:53.956234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.000 ms 00:45:34.434 [2024-11-20 05:56:53.956255] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:53.995105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:53.995242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:45:34.434 [2024-11-20 05:56:53.995298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.806 ms 00:45:34.434 [2024-11-20 05:56:53.995339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:54.033063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:54.033170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:45:34.434 [2024-11-20 05:56:54.033204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.728 ms 00:45:34.434 [2024-11-20 05:56:54.033224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:54.069435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:54.069539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:34.434 [2024-11-20 05:56:54.069575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.223 ms 00:45:34.434 [2024-11-20 05:56:54.069597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:54.069658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:54.069749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:34.434 [2024-11-20 05:56:54.069811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:34.434 [2024-11-20 05:56:54.069838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:54.069976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:34.434 [2024-11-20 05:56:54.070028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:34.434 [2024-11-20 05:56:54.070065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:45:34.434 [2024-11-20 05:56:54.070096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:34.434 [2024-11-20 05:56:54.071197] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4063.847 ms, result 0 00:45:34.434 { 00:45:34.434 "name": "ftl0", 00:45:34.434 "uuid": "32ba1394-50df-48b1-865d-b9cff1078769" 00:45:34.434 } 00:45:34.434 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:45:34.434 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:45:34.434 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:45:34.434 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:45:34.434 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:45:34.694 /dev/nbd0 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:45:34.694 1+0 records in 00:45:34.694 1+0 records out 00:45:34.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302014 s, 13.6 MB/s 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:45:34.694 05:56:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:45:34.954 [2024-11-20 05:56:54.651611] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:45:34.954 [2024-11-20 05:56:54.651718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79483 ] 00:45:34.954 [2024-11-20 05:56:54.823917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:35.213 [2024-11-20 05:56:54.935252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:36.601  [2024-11-20T05:56:57.461Z] Copying: 228/1024 [MB] (228 MBps) [2024-11-20T05:56:58.400Z] Copying: 456/1024 [MB] (228 MBps) [2024-11-20T05:56:59.337Z] Copying: 686/1024 [MB] (229 MBps) [2024-11-20T05:56:59.903Z] Copying: 905/1024 [MB] (219 MBps) [2024-11-20T05:57:01.281Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:45:41.362 00:45:41.362 05:57:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:45:43.271 05:57:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:45:43.271 [2024-11-20 05:57:02.807602] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
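
The waitfornbd xtrace a few entries back shows the two polling loops the harness runs before trusting a freshly attached nbd device: first wait for the kernel to list it in /proc/partitions, then prove it services I/O with one direct-mode read. Below is a minimal standalone sketch of that helper, reconstructed from the trace alone — not the canonical SPDK source (which lives in test/common/autotest_common.sh); the back-off sleep and the failure return are assumptions, since this run succeeded on the first pass of both loops:

    # Sketch of the waitfornbd helper traced above (reconstructed, hypothetical).
    waitfornbd() {
        local nbd_name=$1
        local i size
        # Loop 1: wait until the kernel registers the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the traced run matched immediately
        done
        # Loop 2: read one 4 KiB block with O_DIRECT to confirm the device
        # actually answers I/O, then check that a non-empty block came back.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            [ "$size" != "0" ] && return 0
        done
        return 1   # assumed failure path; not exercised in this run
    }
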
00:45:43.271 [2024-11-20 05:57:02.807786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79567 ] 00:45:43.271 [2024-11-20 05:57:02.981914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:43.271 [2024-11-20 05:57:03.094415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:45:44.652  [2024-11-20T05:57:05.509Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-20T05:57:06.447Z] Copying: 41/1024 [MB] (21 MBps) [2024-11-20T05:57:07.826Z] Copying: 60/1024 [MB] (18 MBps) [2024-11-20T05:57:08.764Z] Copying: 81/1024 [MB] (20 MBps) [2024-11-20T05:57:09.700Z] Copying: 101/1024 [MB] (20 MBps) [2024-11-20T05:57:10.637Z] Copying: 122/1024 [MB] (20 MBps) [2024-11-20T05:57:11.576Z] Copying: 142/1024 [MB] (20 MBps) [2024-11-20T05:57:12.531Z] Copying: 161/1024 [MB] (18 MBps) [2024-11-20T05:57:13.468Z] Copying: 182/1024 [MB] (20 MBps) [2024-11-20T05:57:14.405Z] Copying: 202/1024 [MB] (20 MBps) [2024-11-20T05:57:15.783Z] Copying: 223/1024 [MB] (20 MBps) [2024-11-20T05:57:16.718Z] Copying: 244/1024 [MB] (20 MBps) [2024-11-20T05:57:17.653Z] Copying: 264/1024 [MB] (20 MBps) [2024-11-20T05:57:18.589Z] Copying: 285/1024 [MB] (20 MBps) [2024-11-20T05:57:19.526Z] Copying: 306/1024 [MB] (20 MBps) [2024-11-20T05:57:20.465Z] Copying: 327/1024 [MB] (21 MBps) [2024-11-20T05:57:21.403Z] Copying: 348/1024 [MB] (20 MBps) [2024-11-20T05:57:22.792Z] Copying: 369/1024 [MB] (21 MBps) [2024-11-20T05:57:23.387Z] Copying: 390/1024 [MB] (20 MBps) [2024-11-20T05:57:24.766Z] Copying: 410/1024 [MB] (20 MBps) [2024-11-20T05:57:25.705Z] Copying: 430/1024 [MB] (19 MBps) [2024-11-20T05:57:26.641Z] Copying: 450/1024 [MB] (20 MBps) [2024-11-20T05:57:27.581Z] Copying: 472/1024 [MB] (21 MBps) [2024-11-20T05:57:28.519Z] Copying: 493/1024 [MB] (21 MBps) [2024-11-20T05:57:29.467Z] Copying: 515/1024 [MB] (21 MBps) [2024-11-20T05:57:30.406Z] Copying: 536/1024 [MB] (21 MBps) [2024-11-20T05:57:31.784Z] Copying: 557/1024 [MB] (21 MBps) [2024-11-20T05:57:32.352Z] Copying: 578/1024 [MB] (20 MBps) [2024-11-20T05:57:33.732Z] Copying: 600/1024 [MB] (21 MBps) [2024-11-20T05:57:34.672Z] Copying: 621/1024 [MB] (21 MBps) [2024-11-20T05:57:35.610Z] Copying: 642/1024 [MB] (20 MBps) [2024-11-20T05:57:36.548Z] Copying: 664/1024 [MB] (22 MBps) [2024-11-20T05:57:37.501Z] Copying: 685/1024 [MB] (21 MBps) [2024-11-20T05:57:38.440Z] Copying: 706/1024 [MB] (20 MBps) [2024-11-20T05:57:39.387Z] Copying: 726/1024 [MB] (19 MBps) [2024-11-20T05:57:40.768Z] Copying: 746/1024 [MB] (20 MBps) [2024-11-20T05:57:41.338Z] Copying: 766/1024 [MB] (20 MBps) [2024-11-20T05:57:42.718Z] Copying: 787/1024 [MB] (20 MBps) [2024-11-20T05:57:43.656Z] Copying: 808/1024 [MB] (20 MBps) [2024-11-20T05:57:44.662Z] Copying: 829/1024 [MB] (20 MBps) [2024-11-20T05:57:45.600Z] Copying: 849/1024 [MB] (20 MBps) [2024-11-20T05:57:46.538Z] Copying: 869/1024 [MB] (20 MBps) [2024-11-20T05:57:47.476Z] Copying: 889/1024 [MB] (20 MBps) [2024-11-20T05:57:48.427Z] Copying: 910/1024 [MB] (20 MBps) [2024-11-20T05:57:49.364Z] Copying: 930/1024 [MB] (20 MBps) [2024-11-20T05:57:50.744Z] Copying: 951/1024 [MB] (20 MBps) [2024-11-20T05:57:51.682Z] Copying: 971/1024 [MB] (20 MBps) [2024-11-20T05:57:52.622Z] Copying: 992/1024 [MB] (20 MBps) [2024-11-20T05:57:53.192Z] Copying: 1011/1024 [MB] (19 MBps) [2024-11-20T05:57:54.593Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:46:34.674 
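
Steps 75-78 of dirty_shutdown.sh traced above form the write leg of the integrity check: generate 1 GiB of random data, checksum it before it touches the device, then push it through /dev/nbd0 with direct I/O (the ~20 MBps copy just completed) and flush. A minimal sketch of the same flow using plain dd in place of spdk_dd, under assumed paths:

    # Illustrative replay of the write-and-checksum flow; TESTFILE and NBD
    # are hypothetical stand-ins for the script's own paths.
    TESTFILE=/tmp/ftl_testfile
    NBD=/dev/nbd0

    # 262144 x 4096-byte blocks = 1 GiB of random payload.
    dd if=/dev/urandom of="$TESTFILE" bs=4096 count=262144

    # Reference checksum, recorded before the data passes through FTL.
    md5sum "$TESTFILE" > "$TESTFILE.md5"

    # O_DIRECT write through the nbd endpoint so every block reaches the
    # ftl0 bdev rather than stopping in the page cache, then flush it.
    dd if="$TESTFILE" of="$NBD" bs=4096 count=262144 oflag=direct
    sync "$NBD"
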
00:46:34.674 05:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:46:34.674 05:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:46:34.674 05:57:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:46:34.934 [2024-11-20 05:57:54.618941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.619010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:46:34.934 [2024-11-20 05:57:54.619027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:34.934 [2024-11-20 05:57:54.619038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.619068] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:46:34.934 [2024-11-20 05:57:54.624154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.624192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:46:34.934 [2024-11-20 05:57:54.624206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.071 ms 00:46:34.934 [2024-11-20 05:57:54.624215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.626537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.626667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:46:34.934 [2024-11-20 05:57:54.626690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:46:34.934 [2024-11-20 05:57:54.626700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.644222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.644264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:46:34.934 [2024-11-20 05:57:54.644279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.517 ms 00:46:34.934 [2024-11-20 05:57:54.644288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.649532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.649597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:46:34.934 [2024-11-20 05:57:54.649634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.210 ms 00:46:34.934 [2024-11-20 05:57:54.649655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.688105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.688212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:46:34.934 [2024-11-20 05:57:54.688254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.421 ms 00:46:34.934 [2024-11-20 05:57:54.688276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.711352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.711471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:46:34.934 [2024-11-20 05:57:54.711512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 23.028 ms 00:46:34.934 [2024-11-20 05:57:54.711544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.711748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.711792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:46:34.934 [2024-11-20 05:57:54.711862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:46:34.934 [2024-11-20 05:57:54.711893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.749585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.749727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:46:34.934 [2024-11-20 05:57:54.749765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.714 ms 00:46:34.934 [2024-11-20 05:57:54.749793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.785392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.785478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:46:34.934 [2024-11-20 05:57:54.785506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.570 ms 00:46:34.934 [2024-11-20 05:57:54.785515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:34.934 [2024-11-20 05:57:54.821467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:34.934 [2024-11-20 05:57:54.821520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:46:34.934 [2024-11-20 05:57:54.821535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.971 ms 00:46:34.934 [2024-11-20 05:57:54.821543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.195 [2024-11-20 05:57:54.858163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.195 [2024-11-20 05:57:54.858206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:46:35.195 [2024-11-20 05:57:54.858220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.585 ms 00:46:35.195 [2024-11-20 05:57:54.858229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.195 [2024-11-20 05:57:54.858273] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:46:35.195 [2024-11-20 05:57:54.858289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858363] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 
[2024-11-20 05:57:54.858604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:46:35.195 [2024-11-20 05:57:54.858714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:46:35.196 [2024-11-20 05:57:54.858892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.858998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:46:35.196 [2024-11-20 05:57:54.859331] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:46:35.196 [2024-11-20 05:57:54.859342] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32ba1394-50df-48b1-865d-b9cff1078769 00:46:35.196 [2024-11-20 05:57:54.859353] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:46:35.196 [2024-11-20 05:57:54.859367] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:46:35.196 [2024-11-20 05:57:54.859375] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:46:35.196 [2024-11-20 05:57:54.859390] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:46:35.196 [2024-11-20 05:57:54.859398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:46:35.196 [2024-11-20 05:57:54.859409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:46:35.196 [2024-11-20 05:57:54.859423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:46:35.196 [2024-11-20 05:57:54.859434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:46:35.196 [2024-11-20 05:57:54.859441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:46:35.196 [2024-11-20 05:57:54.859452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.196 [2024-11-20 05:57:54.859461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:46:35.196 [2024-11-20 05:57:54.859473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.184 ms 00:46:35.196 [2024-11-20 05:57:54.859481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:54.881856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.196 [2024-11-20 05:57:54.881901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:46:35.196 [2024-11-20 05:57:54.881916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.361 ms 00:46:35.196 [2024-11-20 05:57:54.881930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:54.882601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:35.196 [2024-11-20 05:57:54.882616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:46:35.196 [2024-11-20 05:57:54.882628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:46:35.196 [2024-11-20 05:57:54.882636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:54.954648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.196 [2024-11-20 05:57:54.954709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:35.196 [2024-11-20 05:57:54.954724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.196 [2024-11-20 05:57:54.954749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:54.954861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.196 [2024-11-20 05:57:54.954873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:35.196 [2024-11-20 05:57:54.954884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.196 [2024-11-20 05:57:54.954892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:54.955065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.196 [2024-11-20 05:57:54.955083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:35.196 [2024-11-20 05:57:54.955095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.196 [2024-11-20 05:57:54.955103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:54.955131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.196 [2024-11-20 05:57:54.955140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:35.196 [2024-11-20 05:57:54.955152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.196 [2024-11-20 05:57:54.955160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.196 [2024-11-20 05:57:55.095098] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.196 [2024-11-20 05:57:55.095177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:35.196 [2024-11-20 05:57:55.095195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.196 [2024-11-20 05:57:55.095203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.203265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.203345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:35.456 [2024-11-20 05:57:55.203363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.203372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.203517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.203528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:35.456 [2024-11-20 05:57:55.203540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.203552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.203610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.203621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:35.456 [2024-11-20 05:57:55.203631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.203639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.203766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.203778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:35.456 [2024-11-20 05:57:55.203789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.203799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.203887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.203899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:46:35.456 [2024-11-20 05:57:55.203910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.203919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.203980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.203991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:35.456 [2024-11-20 05:57:55.204001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.204009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:35.456 [2024-11-20 05:57:55.204070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:46:35.456 [2024-11-20 05:57:55.204080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:35.456 [2024-11-20 05:57:55.204091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:46:35.456 [2024-11-20 05:57:55.204098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:46:35.456 [2024-11-20 05:57:55.204252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 586.405 ms, result 0 00:46:35.456 true 00:46:35.456 05:57:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79329 00:46:35.456 05:57:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79329 00:46:35.456 05:57:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:46:35.456 [2024-11-20 05:57:55.334604] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:46:35.456 [2024-11-20 05:57:55.334737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80103 ] 00:46:35.715 [2024-11-20 05:57:55.511913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:35.973 [2024-11-20 05:57:55.655215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:37.350  [2024-11-20T05:57:58.208Z] Copying: 201/1024 [MB] (201 MBps) [2024-11-20T05:57:59.146Z] Copying: 388/1024 [MB] (187 MBps) [2024-11-20T05:58:00.087Z] Copying: 601/1024 [MB] (212 MBps) [2024-11-20T05:58:01.476Z] Copying: 814/1024 [MB] (213 MBps) [2024-11-20T05:58:02.413Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:46:42.494 00:46:42.494 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79329 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:46:42.494 05:58:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:42.494 [2024-11-20 05:58:02.361918] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
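
The kill -9 at step 83 takes spdk_tgt down with no orderly process teardown (the shell's "79329 Killed" notice above is just bash reaping that job), and step 84 clears the dead target's shm trace file. When spdk_dd at step 88 reopens ftl0 from the saved ftl.json, it has to recreate the whole bdev stack itself; the blobstore recovery notices that follow are consistent with metadata the killed process never flushed. A sketch of the sequence under assumed variable names (the literal pid 79329 and repo paths come from the log):

    # Illustrative dirty-shutdown sequence; $tgt_pid and $TESTDIR are
    # hypothetical stand-ins for the script's own variables.
    kill -9 "$tgt_pid"                             # no clean process shutdown
    rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"    # drop the stale trace file

    # Stage a second 1 GiB payload while no target is running.
    spdk_dd --if=/dev/urandom --of="$TESTDIR/testfile2" --bs=4096 --count=262144

    # Reopen ftl0 from the saved subsystem config: spdk_dd loads the JSON,
    # brings the bdevs back up, and appends the new data after the first
    # 1 GiB (seek of 262144 blocks x 4096 bytes).
    spdk_dd --if="$TESTDIR/testfile2" --ob=ftl0 --count=262144 \
        --seek=262144 --json="$TESTDIR/config/ftl.json"
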
00:46:42.494 [2024-11-20 05:58:02.362059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80180 ] 00:46:42.753 [2024-11-20 05:58:02.540059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:43.013 [2024-11-20 05:58:02.681288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:43.272 [2024-11-20 05:58:03.120527] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:43.272 [2024-11-20 05:58:03.120724] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:46:43.272 [2024-11-20 05:58:03.186597] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:46:43.272 [2024-11-20 05:58:03.186952] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:46:43.272 [2024-11-20 05:58:03.187135] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:46:43.842 [2024-11-20 05:58:03.466201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.466370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:46:43.842 [2024-11-20 05:58:03.466390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:46:43.842 [2024-11-20 05:58:03.466400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.466478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.466490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:46:43.842 [2024-11-20 05:58:03.466499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:46:43.842 [2024-11-20 05:58:03.466507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.466529] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:46:43.842 [2024-11-20 05:58:03.467713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:46:43.842 [2024-11-20 05:58:03.467746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.467757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:46:43.842 [2024-11-20 05:58:03.467767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:46:43.842 [2024-11-20 05:58:03.467775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.470401] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:46:43.842 [2024-11-20 05:58:03.493625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.493676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:46:43.842 [2024-11-20 05:58:03.493691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.268 ms 00:46:43.842 [2024-11-20 05:58:03.493701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.493787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.493798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:46:43.842 [2024-11-20 05:58:03.493822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:46:43.842 [2024-11-20 05:58:03.493830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.507250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.507375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:46:43.842 [2024-11-20 05:58:03.507393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.365 ms 00:46:43.842 [2024-11-20 05:58:03.507401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.507502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.507517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:46:43.842 [2024-11-20 05:58:03.507526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:46:43.842 [2024-11-20 05:58:03.507534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.507616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.507627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:46:43.842 [2024-11-20 05:58:03.507636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:46:43.842 [2024-11-20 05:58:03.507644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.507672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:46:43.842 [2024-11-20 05:58:03.513800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.513899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:46:43.842 [2024-11-20 05:58:03.513915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.149 ms 00:46:43.842 [2024-11-20 05:58:03.513924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.842 [2024-11-20 05:58:03.513961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.842 [2024-11-20 05:58:03.513971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:46:43.842 [2024-11-20 05:58:03.513981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:46:43.843 [2024-11-20 05:58:03.513989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.843 [2024-11-20 05:58:03.514033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:46:43.843 [2024-11-20 05:58:03.514081] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:46:43.843 [2024-11-20 05:58:03.514124] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:46:43.843 [2024-11-20 05:58:03.514140] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:46:43.843 [2024-11-20 05:58:03.514234] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:46:43.843 [2024-11-20 05:58:03.514247] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:46:43.843 
[2024-11-20 05:58:03.514258] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:46:43.843 [2024-11-20 05:58:03.514269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514283] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514292] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:46:43.843 [2024-11-20 05:58:03.514300] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:46:43.843 [2024-11-20 05:58:03.514308] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:46:43.843 [2024-11-20 05:58:03.514316] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:46:43.843 [2024-11-20 05:58:03.514325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.843 [2024-11-20 05:58:03.514333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:46:43.843 [2024-11-20 05:58:03.514341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:46:43.843 [2024-11-20 05:58:03.514349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.843 [2024-11-20 05:58:03.514425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.843 [2024-11-20 05:58:03.514438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:46:43.843 [2024-11-20 05:58:03.514446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:46:43.843 [2024-11-20 05:58:03.514454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.843 [2024-11-20 05:58:03.514558] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:46:43.843 [2024-11-20 05:58:03.514572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:46:43.843 [2024-11-20 05:58:03.514581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:46:43.843 [2024-11-20 05:58:03.514605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:46:43.843 [2024-11-20 05:58:03.514629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:43.843 [2024-11-20 05:58:03.514646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:46:43.843 [2024-11-20 05:58:03.514665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:46:43.843 [2024-11-20 05:58:03.514672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:46:43.843 [2024-11-20 05:58:03.514680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:46:43.843 [2024-11-20 05:58:03.514687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:46:43.843 [2024-11-20 05:58:03.514694] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:46:43.843 [2024-11-20 05:58:03.514720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:46:43.843 [2024-11-20 05:58:03.514742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:46:43.843 [2024-11-20 05:58:03.514761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:46:43.843 [2024-11-20 05:58:03.514779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:46:43.843 [2024-11-20 05:58:03.514798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:46:43.843 [2024-11-20 05:58:03.514831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:43.843 [2024-11-20 05:58:03.514845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:46:43.843 [2024-11-20 05:58:03.514852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:46:43.843 [2024-11-20 05:58:03.514858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:46:43.843 [2024-11-20 05:58:03.514865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:46:43.843 [2024-11-20 05:58:03.514872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:46:43.843 [2024-11-20 05:58:03.514879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:46:43.843 [2024-11-20 05:58:03.514893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:46:43.843 [2024-11-20 05:58:03.514900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:43.843 [2024-11-20 05:58:03.514908] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:46:43.843 [2024-11-20 05:58:03.514916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:46:43.843 [2024-11-20 05:58:03.514923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:46:43.843 [2024-11-20 
05:58:03.514942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:46:43.843 [2024-11-20 05:58:03.514949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:46:43.843 [2024-11-20 05:58:03.514956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:46:43.843 [2024-11-20 05:58:03.514963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:46:43.843 [2024-11-20 05:58:03.514969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:46:43.843 [2024-11-20 05:58:03.514976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:46:43.843 [2024-11-20 05:58:03.514985] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:46:43.843 [2024-11-20 05:58:03.514995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:46:43.843 [2024-11-20 05:58:03.515012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:46:43.843 [2024-11-20 05:58:03.515019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:46:43.843 [2024-11-20 05:58:03.515026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:46:43.843 [2024-11-20 05:58:03.515033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:46:43.843 [2024-11-20 05:58:03.515040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:46:43.843 [2024-11-20 05:58:03.515048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:46:43.843 [2024-11-20 05:58:03.515055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:46:43.843 [2024-11-20 05:58:03.515062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:46:43.843 [2024-11-20 05:58:03.515068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:46:43.843 [2024-11-20 05:58:03.515105] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:46:43.843 [2024-11-20 05:58:03.515112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:43.843 [2024-11-20 05:58:03.515130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:46:43.843 [2024-11-20 05:58:03.515138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:46:43.844 [2024-11-20 05:58:03.515146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:46:43.844 [2024-11-20 05:58:03.515154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.515162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:46:43.844 [2024-11-20 05:58:03.515171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:46:43.844 [2024-11-20 05:58:03.515178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.568335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.568491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:46:43.844 [2024-11-20 05:58:03.568512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.204 ms 00:46:43.844 [2024-11-20 05:58:03.568521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.568644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.568661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:46:43.844 [2024-11-20 05:58:03.568671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:46:43.844 [2024-11-20 05:58:03.568680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.636789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.636862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:46:43.844 [2024-11-20 05:58:03.636884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.122 ms 00:46:43.844 [2024-11-20 05:58:03.636893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.636975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.636986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:46:43.844 [2024-11-20 05:58:03.636995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:46:43.844 [2024-11-20 05:58:03.637003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.637860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.637886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:46:43.844 [2024-11-20 05:58:03.637896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:46:43.844 [2024-11-20 05:58:03.637904] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.638047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.638061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:46:43.844 [2024-11-20 05:58:03.638071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:46:43.844 [2024-11-20 05:58:03.638079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.663189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.663245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:46:43.844 [2024-11-20 05:58:03.663272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.132 ms 00:46:43.844 [2024-11-20 05:58:03.663282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.686466] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:46:43.844 [2024-11-20 05:58:03.686525] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:46:43.844 [2024-11-20 05:58:03.686544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.686556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:46:43.844 [2024-11-20 05:58:03.686569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.139 ms 00:46:43.844 [2024-11-20 05:58:03.686579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.723742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.723952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:46:43.844 [2024-11-20 05:58:03.723994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.150 ms 00:46:43.844 [2024-11-20 05:58:03.724004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:43.844 [2024-11-20 05:58:03.746766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:43.844 [2024-11-20 05:58:03.746834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:46:43.844 [2024-11-20 05:58:03.746850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.719 ms 00:46:43.844 [2024-11-20 05:58:03.746861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.104 [2024-11-20 05:58:03.770403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.104 [2024-11-20 05:58:03.770568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:46:44.104 [2024-11-20 05:58:03.770593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.498 ms 00:46:44.104 [2024-11-20 05:58:03.770605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.104 [2024-11-20 05:58:03.771774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.104 [2024-11-20 05:58:03.771818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:46:44.104 [2024-11-20 05:58:03.771831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:46:44.104 [2024-11-20 05:58:03.771840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:46:44.104 [2024-11-20 05:58:03.881184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.104 [2024-11-20 05:58:03.881267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:46:44.104 [2024-11-20 05:58:03.881284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.521 ms 00:46:44.104 [2024-11-20 05:58:03.881294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.104 [2024-11-20 05:58:03.894291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:46:44.104 [2024-11-20 05:58:03.899563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.104 [2024-11-20 05:58:03.899598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:46:44.104 [2024-11-20 05:58:03.899612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.213 ms 00:46:44.104 [2024-11-20 05:58:03.899621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.104 [2024-11-20 05:58:03.899775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.104 [2024-11-20 05:58:03.899788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:46:44.105 [2024-11-20 05:58:03.899798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:44.105 [2024-11-20 05:58:03.899822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.105 [2024-11-20 05:58:03.899914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.105 [2024-11-20 05:58:03.899949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:46:44.105 [2024-11-20 05:58:03.899959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:46:44.105 [2024-11-20 05:58:03.899967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.105 [2024-11-20 05:58:03.900006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.105 [2024-11-20 05:58:03.900021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:46:44.105 [2024-11-20 05:58:03.900031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:46:44.105 [2024-11-20 05:58:03.900040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.105 [2024-11-20 05:58:03.900079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:46:44.105 [2024-11-20 05:58:03.900090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.105 [2024-11-20 05:58:03.900098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:46:44.105 [2024-11-20 05:58:03.900107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:46:44.105 [2024-11-20 05:58:03.900115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.105 [2024-11-20 05:58:03.940032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.105 [2024-11-20 05:58:03.940099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:46:44.105 [2024-11-20 05:58:03.940115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.968 ms 00:46:44.105 [2024-11-20 05:58:03.940124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.105 [2024-11-20 05:58:03.940244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:46:44.105 [2024-11-20 
05:58:03.940256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:46:44.105 [2024-11-20 05:58:03.940278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:46:44.105 [2024-11-20 05:58:03.940286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:46:44.105 [2024-11-20 05:58:03.942027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 476.140 ms, result 0 00:46:45.039  [2024-11-20T05:58:06.338Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T05:58:07.277Z] Copying: 55/1024 [MB] (30 MBps) [2024-11-20T05:58:08.213Z] Copying: 84/1024 [MB] (28 MBps) [2024-11-20T05:58:09.151Z] Copying: 113/1024 [MB] (29 MBps) [2024-11-20T05:58:10.086Z] Copying: 143/1024 [MB] (29 MBps) [2024-11-20T05:58:11.023Z] Copying: 173/1024 [MB] (30 MBps) [2024-11-20T05:58:11.955Z] Copying: 204/1024 [MB] (30 MBps) [2024-11-20T05:58:13.331Z] Copying: 233/1024 [MB] (29 MBps) [2024-11-20T05:58:14.268Z] Copying: 262/1024 [MB] (29 MBps) [2024-11-20T05:58:15.203Z] Copying: 293/1024 [MB] (30 MBps) [2024-11-20T05:58:16.139Z] Copying: 325/1024 [MB] (32 MBps) [2024-11-20T05:58:17.102Z] Copying: 356/1024 [MB] (31 MBps) [2024-11-20T05:58:18.054Z] Copying: 388/1024 [MB] (31 MBps) [2024-11-20T05:58:18.991Z] Copying: 420/1024 [MB] (32 MBps) [2024-11-20T05:58:19.925Z] Copying: 453/1024 [MB] (33 MBps) [2024-11-20T05:58:21.300Z] Copying: 483/1024 [MB] (30 MBps) [2024-11-20T05:58:22.241Z] Copying: 511/1024 [MB] (27 MBps) [2024-11-20T05:58:23.184Z] Copying: 539/1024 [MB] (28 MBps) [2024-11-20T05:58:24.145Z] Copying: 568/1024 [MB] (28 MBps) [2024-11-20T05:58:25.082Z] Copying: 597/1024 [MB] (29 MBps) [2024-11-20T05:58:26.018Z] Copying: 627/1024 [MB] (29 MBps) [2024-11-20T05:58:26.951Z] Copying: 655/1024 [MB] (28 MBps) [2024-11-20T05:58:28.328Z] Copying: 683/1024 [MB] (28 MBps) [2024-11-20T05:58:29.265Z] Copying: 712/1024 [MB] (28 MBps) [2024-11-20T05:58:30.204Z] Copying: 740/1024 [MB] (28 MBps) [2024-11-20T05:58:31.153Z] Copying: 769/1024 [MB] (28 MBps) [2024-11-20T05:58:32.105Z] Copying: 797/1024 [MB] (28 MBps) [2024-11-20T05:58:33.043Z] Copying: 826/1024 [MB] (28 MBps) [2024-11-20T05:58:33.981Z] Copying: 854/1024 [MB] (28 MBps) [2024-11-20T05:58:34.921Z] Copying: 883/1024 [MB] (28 MBps) [2024-11-20T05:58:36.302Z] Copying: 912/1024 [MB] (28 MBps) [2024-11-20T05:58:37.241Z] Copying: 940/1024 [MB] (28 MBps) [2024-11-20T05:58:38.191Z] Copying: 969/1024 [MB] (28 MBps) [2024-11-20T05:58:39.130Z] Copying: 998/1024 [MB] (29 MBps) [2024-11-20T05:58:39.698Z] Copying: 1023/1024 [MB] (24 MBps) [2024-11-20T05:58:39.698Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-11-20 05:58:39.584591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:19.779 [2024-11-20 05:58:39.584683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:19.779 [2024-11-20 05:58:39.584717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:19.779 [2024-11-20 05:58:39.584727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:19.779 [2024-11-20 05:58:39.585925] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:19.779 [2024-11-20 05:58:39.592572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:19.779 [2024-11-20 05:58:39.592610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:19.779 [2024-11-20 05:58:39.592623] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.621 ms 00:47:19.779 [2024-11-20 05:58:39.592632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:19.779 [2024-11-20 05:58:39.613726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:19.779 [2024-11-20 05:58:39.613843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:19.779 [2024-11-20 05:58:39.613877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.387 ms 00:47:19.779 [2024-11-20 05:58:39.613916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:19.779 [2024-11-20 05:58:39.640326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:19.779 [2024-11-20 05:58:39.640376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:19.779 [2024-11-20 05:58:39.640394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.425 ms 00:47:19.779 [2024-11-20 05:58:39.640405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:19.779 [2024-11-20 05:58:39.645780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:19.779 [2024-11-20 05:58:39.645831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:19.779 [2024-11-20 05:58:39.645843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.350 ms 00:47:19.779 [2024-11-20 05:58:39.645851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:19.779 [2024-11-20 05:58:39.684490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:19.779 [2024-11-20 05:58:39.684545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:19.779 [2024-11-20 05:58:39.684560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.650 ms 00:47:19.779 [2024-11-20 05:58:39.684586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.706853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.041 [2024-11-20 05:58:39.706961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:20.041 [2024-11-20 05:58:39.706978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.259 ms 00:47:20.041 [2024-11-20 05:58:39.706988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.808776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.041 [2024-11-20 05:58:39.808886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:20.041 [2024-11-20 05:58:39.808916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.929 ms 00:47:20.041 [2024-11-20 05:58:39.808925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.847399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.041 [2024-11-20 05:58:39.847456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:20.041 [2024-11-20 05:58:39.847471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.524 ms 00:47:20.041 [2024-11-20 05:58:39.847480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.883509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.041 [2024-11-20 05:58:39.883577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:47:20.041 [2024-11-20 05:58:39.883597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.056 ms 00:47:20.041 [2024-11-20 05:58:39.883610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.918943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.041 [2024-11-20 05:58:39.919004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:20.041 [2024-11-20 05:58:39.919017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.352 ms 00:47:20.041 [2024-11-20 05:58:39.919024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.953971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.041 [2024-11-20 05:58:39.954018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:20.041 [2024-11-20 05:58:39.954030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.930 ms 00:47:20.041 [2024-11-20 05:58:39.954038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.041 [2024-11-20 05:58:39.954075] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:20.041 [2024-11-20 05:58:39.954090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107264 / 261120 wr_cnt: 1 state: open 00:47:20.041 [2024-11-20 05:58:39.954101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:47:20.041 [2024-11-20 05:58:39.954221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:20.041 [2024-11-20 05:58:39.954545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954838] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:20.042 [2024-11-20 05:58:39.954925] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:20.042 [2024-11-20 05:58:39.954933] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32ba1394-50df-48b1-865d-b9cff1078769 00:47:20.042 [2024-11-20 05:58:39.954942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107264 00:47:20.042 [2024-11-20 05:58:39.954956] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108224 00:47:20.042 [2024-11-20 05:58:39.954977] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107264 00:47:20.042 [2024-11-20 05:58:39.954987] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:47:20.042 [2024-11-20 05:58:39.954994] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:20.042 [2024-11-20 05:58:39.955003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:20.042 [2024-11-20 05:58:39.955011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:20.042 [2024-11-20 05:58:39.955018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:20.042 [2024-11-20 05:58:39.955025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:20.042 [2024-11-20 05:58:39.955033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.042 [2024-11-20 05:58:39.955041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:20.042 [2024-11-20 05:58:39.955049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:47:20.042 [2024-11-20 05:58:39.955057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.302 [2024-11-20 05:58:39.976665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.302 [2024-11-20 05:58:39.976711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:20.302 [2024-11-20 05:58:39.976724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.599 ms 00:47:20.302 [2024-11-20 05:58:39.976732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.302 [2024-11-20 
05:58:39.977414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:20.302 [2024-11-20 05:58:39.977428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:20.302 [2024-11-20 05:58:39.977438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:47:20.303 [2024-11-20 05:58:39.977504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.303 [2024-11-20 05:58:40.032400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.303 [2024-11-20 05:58:40.032473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:20.303 [2024-11-20 05:58:40.032487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.303 [2024-11-20 05:58:40.032512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.303 [2024-11-20 05:58:40.032604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.303 [2024-11-20 05:58:40.032615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:20.303 [2024-11-20 05:58:40.032623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.303 [2024-11-20 05:58:40.032635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.303 [2024-11-20 05:58:40.032717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.303 [2024-11-20 05:58:40.032729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:20.303 [2024-11-20 05:58:40.032739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.303 [2024-11-20 05:58:40.032747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.303 [2024-11-20 05:58:40.032764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.303 [2024-11-20 05:58:40.032774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:20.303 [2024-11-20 05:58:40.032782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.303 [2024-11-20 05:58:40.032789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.303 [2024-11-20 05:58:40.167714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.303 [2024-11-20 05:58:40.167799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:20.303 [2024-11-20 05:58:40.167835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.303 [2024-11-20 05:58:40.167860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:20.562 [2024-11-20 05:58:40.275276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:20.562 [2024-11-20 05:58:40.275448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275455] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:20.562 [2024-11-20 05:58:40.275519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:20.562 [2024-11-20 05:58:40.275691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:20.562 [2024-11-20 05:58:40.275752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:20.562 [2024-11-20 05:58:40.275851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.275908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:20.562 [2024-11-20 05:58:40.275918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:20.562 [2024-11-20 05:58:40.275926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:20.562 [2024-11-20 05:58:40.275933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:20.562 [2024-11-20 05:58:40.276070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 694.352 ms, result 0 00:47:23.098 00:47:23.098 00:47:23.098 05:58:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:47:24.479 05:58:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:47:24.479 [2024-11-20 05:58:44.388584] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:47:24.479 [2024-11-20 05:58:44.388881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80593 ] 00:47:24.754 [2024-11-20 05:58:44.573829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.014 [2024-11-20 05:58:44.730500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:25.582 [2024-11-20 05:58:45.192974] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:25.582 [2024-11-20 05:58:45.193191] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:47:25.582 [2024-11-20 05:58:45.357709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.357946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:47:25.582 [2024-11-20 05:58:45.358001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:25.582 [2024-11-20 05:58:45.358028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.358160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.358203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:25.582 [2024-11-20 05:58:45.358240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:47:25.582 [2024-11-20 05:58:45.358265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.358327] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:47:25.582 [2024-11-20 05:58:45.359477] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:47:25.582 [2024-11-20 05:58:45.359572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.359596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:25.582 [2024-11-20 05:58:45.359619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:47:25.582 [2024-11-20 05:58:45.359639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.362225] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:47:25.582 [2024-11-20 05:58:45.383789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.383918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:47:25.582 [2024-11-20 05:58:45.383953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.608 ms 00:47:25.582 [2024-11-20 05:58:45.383974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.384105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.384144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:47:25.582 [2024-11-20 05:58:45.384171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:47:25.582 [2024-11-20 05:58:45.384192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.397745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:47:25.582 [2024-11-20 05:58:45.397922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:25.582 [2024-11-20 05:58:45.397956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.470 ms 00:47:25.582 [2024-11-20 05:58:45.397986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.398129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.398172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:25.582 [2024-11-20 05:58:45.398200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:47:25.582 [2024-11-20 05:58:45.398229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.398348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.398385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:47:25.582 [2024-11-20 05:58:45.398417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:47:25.582 [2024-11-20 05:58:45.398443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.582 [2024-11-20 05:58:45.398504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:47:25.582 [2024-11-20 05:58:45.404391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.582 [2024-11-20 05:58:45.404475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:25.583 [2024-11-20 05:58:45.404508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.914 ms 00:47:25.583 [2024-11-20 05:58:45.404533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.583 [2024-11-20 05:58:45.404614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.583 [2024-11-20 05:58:45.404653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:47:25.583 [2024-11-20 05:58:45.404685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:47:25.583 [2024-11-20 05:58:45.404711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.583 [2024-11-20 05:58:45.404765] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:47:25.583 [2024-11-20 05:58:45.404829] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:47:25.583 [2024-11-20 05:58:45.404897] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:47:25.583 [2024-11-20 05:58:45.404947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:47:25.583 [2024-11-20 05:58:45.405093] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:47:25.583 [2024-11-20 05:58:45.405136] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:47:25.583 [2024-11-20 05:58:45.405178] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:47:25.583 [2024-11-20 05:58:45.405229] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:47:25.583 [2024-11-20 05:58:45.405278] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:47:25.583 [2024-11-20 05:58:45.405326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:47:25.583 [2024-11-20 05:58:45.405357] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:47:25.583 [2024-11-20 05:58:45.405386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:47:25.583 [2024-11-20 05:58:45.405417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:47:25.583 [2024-11-20 05:58:45.405439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.583 [2024-11-20 05:58:45.405460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:47:25.583 [2024-11-20 05:58:45.405490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:47:25.583 [2024-11-20 05:58:45.405521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.583 [2024-11-20 05:58:45.405628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.583 [2024-11-20 05:58:45.405659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:47:25.583 [2024-11-20 05:58:45.405687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:47:25.583 [2024-11-20 05:58:45.405715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.583 [2024-11-20 05:58:45.405859] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:47:25.583 [2024-11-20 05:58:45.405902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:47:25.583 [2024-11-20 05:58:45.405932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:25.583 [2024-11-20 05:58:45.405961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.405988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:47:25.583 [2024-11-20 05:58:45.405997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:47:25.583 [2024-11-20 05:58:45.406020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:25.583 [2024-11-20 05:58:45.406035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:47:25.583 [2024-11-20 05:58:45.406042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:47:25.583 [2024-11-20 05:58:45.406050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:47:25.583 [2024-11-20 05:58:45.406057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:47:25.583 [2024-11-20 05:58:45.406065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:47:25.583 [2024-11-20 05:58:45.406087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:47:25.583 [2024-11-20 05:58:45.406102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406109] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:47:25.583 [2024-11-20 05:58:45.406123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:47:25.583 [2024-11-20 05:58:45.406145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:47:25.583 [2024-11-20 05:58:45.406166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:47:25.583 [2024-11-20 05:58:45.406186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:47:25.583 [2024-11-20 05:58:45.406206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:25.583 [2024-11-20 05:58:45.406219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:47:25.583 [2024-11-20 05:58:45.406225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:47:25.583 [2024-11-20 05:58:45.406231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:47:25.583 [2024-11-20 05:58:45.406238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:47:25.583 [2024-11-20 05:58:45.406244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:47:25.583 [2024-11-20 05:58:45.406250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:47:25.583 [2024-11-20 05:58:45.406264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:47:25.583 [2024-11-20 05:58:45.406271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406278] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:47:25.583 [2024-11-20 05:58:45.406287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:47:25.583 [2024-11-20 05:58:45.406295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:47:25.583 [2024-11-20 05:58:45.406311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:47:25.583 [2024-11-20 05:58:45.406318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:47:25.583 [2024-11-20 05:58:45.406325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:47:25.583 
[2024-11-20 05:58:45.406332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:47:25.583 [2024-11-20 05:58:45.406339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:47:25.583 [2024-11-20 05:58:45.406346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:47:25.583 [2024-11-20 05:58:45.406356] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:47:25.583 [2024-11-20 05:58:45.406366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:47:25.583 [2024-11-20 05:58:45.406383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:47:25.583 [2024-11-20 05:58:45.406391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:47:25.583 [2024-11-20 05:58:45.406399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:47:25.583 [2024-11-20 05:58:45.406406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:47:25.583 [2024-11-20 05:58:45.406414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:47:25.583 [2024-11-20 05:58:45.406421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:47:25.583 [2024-11-20 05:58:45.406429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:47:25.583 [2024-11-20 05:58:45.406436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:47:25.583 [2024-11-20 05:58:45.406443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:47:25.583 [2024-11-20 05:58:45.406478] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:47:25.583 [2024-11-20 05:58:45.406492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:47:25.583 [2024-11-20 05:58:45.406509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:47:25.583 [2024-11-20 05:58:45.406517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:47:25.583 [2024-11-20 05:58:45.406525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:47:25.583 [2024-11-20 05:58:45.406534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.583 [2024-11-20 05:58:45.406543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:47:25.583 [2024-11-20 05:58:45.406552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:47:25.583 [2024-11-20 05:58:45.406559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.583 [2024-11-20 05:58:45.455474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.583 [2024-11-20 05:58:45.455653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:25.583 [2024-11-20 05:58:45.455690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.948 ms 00:47:25.583 [2024-11-20 05:58:45.455700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.583 [2024-11-20 05:58:45.455842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.583 [2024-11-20 05:58:45.455854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:47:25.583 [2024-11-20 05:58:45.455863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:47:25.583 [2024-11-20 05:58:45.455872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.521429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.521489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:25.843 [2024-11-20 05:58:45.521514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.559 ms 00:47:25.843 [2024-11-20 05:58:45.521523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.521605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.521615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:25.843 [2024-11-20 05:58:45.521629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:47:25.843 [2024-11-20 05:58:45.521637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.522532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.522552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:25.843 [2024-11-20 05:58:45.522562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:47:25.843 [2024-11-20 05:58:45.522571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.522735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.522748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:25.843 [2024-11-20 05:58:45.522759] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:47:25.843 [2024-11-20 05:58:45.522772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.546202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.546261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:25.843 [2024-11-20 05:58:45.546281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.450 ms 00:47:25.843 [2024-11-20 05:58:45.546289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.567572] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:47:25.843 [2024-11-20 05:58:45.567621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:47:25.843 [2024-11-20 05:58:45.567635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.567644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:47:25.843 [2024-11-20 05:58:45.567656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.229 ms 00:47:25.843 [2024-11-20 05:58:45.567664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.598520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.598571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:47:25.843 [2024-11-20 05:58:45.598585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.860 ms 00:47:25.843 [2024-11-20 05:58:45.598594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.617425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.617549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:47:25.843 [2024-11-20 05:58:45.617567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.810 ms 00:47:25.843 [2024-11-20 05:58:45.617575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.635541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.635651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:47:25.843 [2024-11-20 05:58:45.635668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.954 ms 00:47:25.843 [2024-11-20 05:58:45.635677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.636693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.636734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:47:25.843 [2024-11-20 05:58:45.636746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:47:25.843 [2024-11-20 05:58:45.636760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:25.843 [2024-11-20 05:58:45.747888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:25.843 [2024-11-20 05:58:45.747982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:47:25.843 [2024-11-20 05:58:45.748012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 111.309 ms 00:47:25.843 [2024-11-20 05:58:45.748022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.767560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:47:26.103 [2024-11-20 05:58:45.773820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.773888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:47:26.103 [2024-11-20 05:58:45.773908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.697 ms 00:47:26.103 [2024-11-20 05:58:45.773919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.774075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.774090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:47:26.103 [2024-11-20 05:58:45.774102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:47:26.103 [2024-11-20 05:58:45.774117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.776626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.776677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:47:26.103 [2024-11-20 05:58:45.776690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.442 ms 00:47:26.103 [2024-11-20 05:58:45.776699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.776754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.776765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:47:26.103 [2024-11-20 05:58:45.776776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:47:26.103 [2024-11-20 05:58:45.776786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.776867] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:47:26.103 [2024-11-20 05:58:45.776882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.776892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:47:26.103 [2024-11-20 05:58:45.776902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:47:26.103 [2024-11-20 05:58:45.776911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.827247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.827362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:47:26.103 [2024-11-20 05:58:45.827381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.400 ms 00:47:26.103 [2024-11-20 05:58:45.827422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:26.103 [2024-11-20 05:58:45.827580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:26.103 [2024-11-20 05:58:45.827593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:47:26.103 [2024-11-20 05:58:45.827604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:47:26.103 [2024-11-20 05:58:45.827614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:47:26.103 [2024-11-20 05:58:45.833179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 474.252 ms, result 0 00:47:27.482  [2024-11-20T05:58:48.342Z] Copying: 1008/1048576 [kB] (1008 kBps) [2024-11-20T05:58:49.278Z] Copying: 5592/1048576 [kB] (4584 kBps) [2024-11-20T05:58:50.351Z] Copying: 38/1024 [MB] (33 MBps) [2024-11-20T05:58:51.287Z] Copying: 73/1024 [MB] (34 MBps) [2024-11-20T05:58:52.234Z] Copying: 107/1024 [MB] (34 MBps) [2024-11-20T05:58:53.171Z] Copying: 144/1024 [MB] (36 MBps) [2024-11-20T05:58:54.111Z] Copying: 179/1024 [MB] (35 MBps) [2024-11-20T05:58:55.047Z] Copying: 216/1024 [MB] (36 MBps) [2024-11-20T05:58:56.419Z] Copying: 252/1024 [MB] (36 MBps) [2024-11-20T05:58:57.373Z] Copying: 289/1024 [MB] (36 MBps) [2024-11-20T05:58:58.306Z] Copying: 327/1024 [MB] (38 MBps) [2024-11-20T05:58:59.242Z] Copying: 364/1024 [MB] (36 MBps) [2024-11-20T05:59:00.176Z] Copying: 401/1024 [MB] (36 MBps) [2024-11-20T05:59:01.113Z] Copying: 436/1024 [MB] (35 MBps) [2024-11-20T05:59:02.050Z] Copying: 472/1024 [MB] (35 MBps) [2024-11-20T05:59:03.428Z] Copying: 511/1024 [MB] (39 MBps) [2024-11-20T05:59:04.368Z] Copying: 548/1024 [MB] (36 MBps) [2024-11-20T05:59:05.308Z] Copying: 584/1024 [MB] (36 MBps) [2024-11-20T05:59:06.245Z] Copying: 620/1024 [MB] (35 MBps) [2024-11-20T05:59:07.182Z] Copying: 655/1024 [MB] (35 MBps) [2024-11-20T05:59:08.121Z] Copying: 691/1024 [MB] (35 MBps) [2024-11-20T05:59:09.067Z] Copying: 726/1024 [MB] (35 MBps) [2024-11-20T05:59:10.007Z] Copying: 758/1024 [MB] (32 MBps) [2024-11-20T05:59:11.388Z] Copying: 792/1024 [MB] (34 MBps) [2024-11-20T05:59:12.329Z] Copying: 830/1024 [MB] (37 MBps) [2024-11-20T05:59:13.268Z] Copying: 866/1024 [MB] (36 MBps) [2024-11-20T05:59:14.258Z] Copying: 902/1024 [MB] (35 MBps) [2024-11-20T05:59:15.195Z] Copying: 938/1024 [MB] (35 MBps) [2024-11-20T05:59:16.132Z] Copying: 974/1024 [MB] (36 MBps) [2024-11-20T05:59:16.390Z] Copying: 1012/1024 [MB] (37 MBps) [2024-11-20T05:59:16.649Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 05:59:16.515694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.730 [2024-11-20 05:59:16.515929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:47:56.730 [2024-11-20 05:59:16.515975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:47:56.730 [2024-11-20 05:59:16.516004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.730 [2024-11-20 05:59:16.516058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:47:56.730 [2024-11-20 05:59:16.522145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.730 [2024-11-20 05:59:16.522323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:47:56.730 [2024-11-20 05:59:16.522364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.034 ms 00:47:56.730 [2024-11-20 05:59:16.522390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.730 [2024-11-20 05:59:16.522710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.730 [2024-11-20 05:59:16.522756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:47:56.730 [2024-11-20 05:59:16.522812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:47:56.730 [2024-11-20 05:59:16.522856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:47:56.730 [2024-11-20 05:59:16.537367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.730 [2024-11-20 05:59:16.537607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:47:56.730 [2024-11-20 05:59:16.537656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.477 ms 00:47:56.730 [2024-11-20 05:59:16.537686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.730 [2024-11-20 05:59:16.543720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.730 [2024-11-20 05:59:16.543941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:47:56.730 [2024-11-20 05:59:16.544000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.955 ms 00:47:56.730 [2024-11-20 05:59:16.544024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.730 [2024-11-20 05:59:16.594264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.730 [2024-11-20 05:59:16.594480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:47:56.730 [2024-11-20 05:59:16.594523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.234 ms 00:47:56.730 [2024-11-20 05:59:16.594549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.730 [2024-11-20 05:59:16.623782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.731 [2024-11-20 05:59:16.624008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:47:56.731 [2024-11-20 05:59:16.624052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.155 ms 00:47:56.731 [2024-11-20 05:59:16.624078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.731 [2024-11-20 05:59:16.625968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.731 [2024-11-20 05:59:16.626096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:47:56.731 [2024-11-20 05:59:16.626141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:47:56.731 [2024-11-20 05:59:16.626177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.991 [2024-11-20 05:59:16.678243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.991 [2024-11-20 05:59:16.678462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:47:56.991 [2024-11-20 05:59:16.678505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.087 ms 00:47:56.991 [2024-11-20 05:59:16.678530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.991 [2024-11-20 05:59:16.731653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.991 [2024-11-20 05:59:16.731876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:47:56.991 [2024-11-20 05:59:16.731965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.084 ms 00:47:56.991 [2024-11-20 05:59:16.732003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.991 [2024-11-20 05:59:16.783567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.991 [2024-11-20 05:59:16.783794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:47:56.991 [2024-11-20 05:59:16.783823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.541 ms 00:47:56.991 [2024-11-20 
05:59:16.783833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.991 [2024-11-20 05:59:16.837631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.991 [2024-11-20 05:59:16.837831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:47:56.991 [2024-11-20 05:59:16.837855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.715 ms 00:47:56.991 [2024-11-20 05:59:16.837865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.991 [2024-11-20 05:59:16.837967] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:47:56.991 [2024-11-20 05:59:16.837990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:47:56.991 [2024-11-20 05:59:16.838003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:47:56.991 [2024-11-20 05:59:16.838013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838175] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:47:56.991 [2024-11-20 05:59:16.838277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 
[2024-11-20 05:59:16.838406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:47:56.992 [2024-11-20 05:59:16.838635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:47:56.992 [2024-11-20 05:59:16.838934] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:47:56.992 [2024-11-20 05:59:16.838944] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32ba1394-50df-48b1-865d-b9cff1078769 00:47:56.992 [2024-11-20 05:59:16.838965] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:47:56.992 [2024-11-20 05:59:16.838975] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157376 00:47:56.992 [2024-11-20 05:59:16.838983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155392 00:47:56.992 [2024-11-20 05:59:16.838998] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:47:56.992 [2024-11-20 05:59:16.839007] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:47:56.992 [2024-11-20 05:59:16.839017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:47:56.992 [2024-11-20 05:59:16.839025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:47:56.992 [2024-11-20 05:59:16.839055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:47:56.992 [2024-11-20 05:59:16.839063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:47:56.992 [2024-11-20 05:59:16.839090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.992 [2024-11-20 05:59:16.839100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:47:56.992 [2024-11-20 05:59:16.839111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:47:56.992 [2024-11-20 05:59:16.839121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.992 [2024-11-20 05:59:16.865164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.992 [2024-11-20 05:59:16.865347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:47:56.992 [2024-11-20 05:59:16.865367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.996 ms 00:47:56.992 [2024-11-20 05:59:16.865376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:56.992 [2024-11-20 05:59:16.866189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:47:56.992 [2024-11-20 05:59:16.866215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:47:56.992 [2024-11-20 05:59:16.866225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:47:56.992 [2024-11-20 05:59:16.866234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.252 [2024-11-20 05:59:16.931342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.252 [2024-11-20 05:59:16.931428] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:47:57.252 [2024-11-20 05:59:16.931446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.252 [2024-11-20 05:59:16.931456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.252 [2024-11-20 05:59:16.931555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.252 [2024-11-20 05:59:16.931565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:47:57.252 [2024-11-20 05:59:16.931575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.252 [2024-11-20 05:59:16.931584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.252 [2024-11-20 05:59:16.931701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.252 [2024-11-20 05:59:16.931717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:47:57.252 [2024-11-20 05:59:16.931728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.252 [2024-11-20 05:59:16.931737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.252 [2024-11-20 05:59:16.931758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.252 [2024-11-20 05:59:16.931768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:47:57.252 [2024-11-20 05:59:16.931778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.252 [2024-11-20 05:59:16.931787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.252 [2024-11-20 05:59:17.092752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.252 [2024-11-20 05:59:17.092861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:47:57.252 [2024-11-20 05:59:17.092879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.252 [2024-11-20 05:59:17.092890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:47:57.512 [2024-11-20 05:59:17.225133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:47:57.512 [2024-11-20 05:59:17.225302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:47:57.512 [2024-11-20 05:59:17.225377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:47:57.512 [2024-11-20 05:59:17.225556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:47:57.512 [2024-11-20 05:59:17.225645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:47:57.512 [2024-11-20 05:59:17.225724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:47:57.512 [2024-11-20 05:59:17.225798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:47:57.512 [2024-11-20 05:59:17.225837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:47:57.512 [2024-11-20 05:59:17.225846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:47:57.512 [2024-11-20 05:59:17.225998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 711.636 ms, result 0 00:47:58.984 00:47:58.984 00:47:58.984 05:59:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:00.890 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:48:00.890 05:59:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:48:00.890 [2024-11-20 05:59:20.803993] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:48:00.890 [2024-11-20 05:59:20.804126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80958 ] 00:48:01.149 [2024-11-20 05:59:20.985369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:01.409 [2024-11-20 05:59:21.145062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:01.978 [2024-11-20 05:59:21.598222] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:48:01.978 [2024-11-20 05:59:21.598329] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:48:01.978 [2024-11-20 05:59:21.763119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.763200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:48:01.978 [2024-11-20 05:59:21.763224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:48:01.978 [2024-11-20 05:59:21.763234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.763305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.763317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:01.978 [2024-11-20 05:59:21.763331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:48:01.978 [2024-11-20 05:59:21.763340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.763363] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:48:01.978 [2024-11-20 05:59:21.764487] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:48:01.978 [2024-11-20 05:59:21.764517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.764526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:01.978 [2024-11-20 05:59:21.764535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:48:01.978 [2024-11-20 05:59:21.764543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.767163] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:48:01.978 [2024-11-20 05:59:21.789319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.789436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:48:01.978 [2024-11-20 05:59:21.789462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.193 ms 00:48:01.978 [2024-11-20 05:59:21.789476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.789674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.789694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:48:01.978 [2024-11-20 05:59:21.789712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:48:01.978 [2024-11-20 05:59:21.789726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.805073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:48:01.978 [2024-11-20 05:59:21.805177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:01.978 [2024-11-20 05:59:21.805193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.178 ms 00:48:01.978 [2024-11-20 05:59:21.805213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.805348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.805366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:01.978 [2024-11-20 05:59:21.805377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:48:01.978 [2024-11-20 05:59:21.805386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.805508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.978 [2024-11-20 05:59:21.805521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:48:01.978 [2024-11-20 05:59:21.805531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:48:01.978 [2024-11-20 05:59:21.805541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.978 [2024-11-20 05:59:21.805578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:48:01.978 [2024-11-20 05:59:21.811831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.979 [2024-11-20 05:59:21.811871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:01.979 [2024-11-20 05:59:21.811883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.280 ms 00:48:01.979 [2024-11-20 05:59:21.811896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.979 [2024-11-20 05:59:21.811937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.979 [2024-11-20 05:59:21.811948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:48:01.979 [2024-11-20 05:59:21.811957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:48:01.979 [2024-11-20 05:59:21.811965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.979 [2024-11-20 05:59:21.812015] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:48:01.979 [2024-11-20 05:59:21.812042] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:48:01.979 [2024-11-20 05:59:21.812081] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:48:01.979 [2024-11-20 05:59:21.812102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:48:01.979 [2024-11-20 05:59:21.812198] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:48:01.979 [2024-11-20 05:59:21.812209] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:48:01.979 [2024-11-20 05:59:21.812221] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:48:01.979 [2024-11-20 05:59:21.812233] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812243] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812253] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:48:01.979 [2024-11-20 05:59:21.812261] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:48:01.979 [2024-11-20 05:59:21.812270] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:48:01.979 [2024-11-20 05:59:21.812283] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:48:01.979 [2024-11-20 05:59:21.812292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.979 [2024-11-20 05:59:21.812301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:48:01.979 [2024-11-20 05:59:21.812309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:48:01.979 [2024-11-20 05:59:21.812317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.979 [2024-11-20 05:59:21.812390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.979 [2024-11-20 05:59:21.812404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:48:01.979 [2024-11-20 05:59:21.812412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:48:01.979 [2024-11-20 05:59:21.812420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.979 [2024-11-20 05:59:21.812536] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:48:01.979 [2024-11-20 05:59:21.812557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:48:01.979 [2024-11-20 05:59:21.812567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:48:01.979 [2024-11-20 05:59:21.812590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:48:01.979 [2024-11-20 05:59:21.812612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:01.979 [2024-11-20 05:59:21.812629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:48:01.979 [2024-11-20 05:59:21.812636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:48:01.979 [2024-11-20 05:59:21.812644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:48:01.979 [2024-11-20 05:59:21.812651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:48:01.979 [2024-11-20 05:59:21.812659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:48:01.979 [2024-11-20 05:59:21.812678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:48:01.979 [2024-11-20 05:59:21.812692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812700] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:48:01.979 [2024-11-20 05:59:21.812715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:48:01.979 [2024-11-20 05:59:21.812737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:48:01.979 [2024-11-20 05:59:21.812757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:48:01.979 [2024-11-20 05:59:21.812778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:48:01.979 [2024-11-20 05:59:21.812799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:01.979 [2024-11-20 05:59:21.812825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:48:01.979 [2024-11-20 05:59:21.812832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:48:01.979 [2024-11-20 05:59:21.812839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:48:01.979 [2024-11-20 05:59:21.812846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:48:01.979 [2024-11-20 05:59:21.812853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:48:01.979 [2024-11-20 05:59:21.812860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:48:01.979 [2024-11-20 05:59:21.812874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:48:01.979 [2024-11-20 05:59:21.812881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812887] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:48:01.979 [2024-11-20 05:59:21.812896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:48:01.979 [2024-11-20 05:59:21.812905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:48:01.979 [2024-11-20 05:59:21.812912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:48:01.979 [2024-11-20 05:59:21.812921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:48:01.979 [2024-11-20 05:59:21.812928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:48:01.979 [2024-11-20 05:59:21.812935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:48:01.979 
[2024-11-20 05:59:21.812942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:48:01.979 [2024-11-20 05:59:21.812949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:48:01.979 [2024-11-20 05:59:21.812956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:48:01.980 [2024-11-20 05:59:21.812966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:48:01.980 [2024-11-20 05:59:21.812976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.812985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:48:01.980 [2024-11-20 05:59:21.812992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:48:01.980 [2024-11-20 05:59:21.813000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:48:01.980 [2024-11-20 05:59:21.813007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:48:01.980 [2024-11-20 05:59:21.813016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:48:01.980 [2024-11-20 05:59:21.813023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:48:01.980 [2024-11-20 05:59:21.813031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:48:01.980 [2024-11-20 05:59:21.813038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:48:01.980 [2024-11-20 05:59:21.813045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:48:01.980 [2024-11-20 05:59:21.813052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.813060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.813068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.813075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.813083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:48:01.980 [2024-11-20 05:59:21.813090] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:48:01.980 [2024-11-20 05:59:21.813103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.813112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:48:01.980 [2024-11-20 05:59:21.813120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:48:01.980 [2024-11-20 05:59:21.813128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:48:01.980 [2024-11-20 05:59:21.813135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:48:01.980 [2024-11-20 05:59:21.813144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.980 [2024-11-20 05:59:21.813152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:48:01.980 [2024-11-20 05:59:21.813160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:48:01.980 [2024-11-20 05:59:21.813168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.980 [2024-11-20 05:59:21.869362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.980 [2024-11-20 05:59:21.869439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:01.980 [2024-11-20 05:59:21.869456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.242 ms 00:48:01.980 [2024-11-20 05:59:21.869467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:01.980 [2024-11-20 05:59:21.869639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:01.980 [2024-11-20 05:59:21.869659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:48:01.980 [2024-11-20 05:59:21.869671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:48:01.980 [2024-11-20 05:59:21.869682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.239 [2024-11-20 05:59:21.938196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.239 [2024-11-20 05:59:21.938389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:02.239 [2024-11-20 05:59:21.938411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.508 ms 00:48:02.239 [2024-11-20 05:59:21.938422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.239 [2024-11-20 05:59:21.938511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.239 [2024-11-20 05:59:21.938522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:02.239 [2024-11-20 05:59:21.938540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:48:02.239 [2024-11-20 05:59:21.938549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.239 [2024-11-20 05:59:21.939437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.239 [2024-11-20 05:59:21.939480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:02.239 [2024-11-20 05:59:21.939491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:48:02.239 [2024-11-20 05:59:21.939499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.239 [2024-11-20 05:59:21.939651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.239 [2024-11-20 05:59:21.939679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:02.239 [2024-11-20 05:59:21.939688] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:48:02.239 [2024-11-20 05:59:21.939703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.239 [2024-11-20 05:59:21.964767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.239 [2024-11-20 05:59:21.964951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:02.239 [2024-11-20 05:59:21.964980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.085 ms 00:48:02.239 [2024-11-20 05:59:21.964990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.239 [2024-11-20 05:59:21.989324] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:48:02.239 [2024-11-20 05:59:21.989403] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:48:02.239 [2024-11-20 05:59:21.989421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.239 [2024-11-20 05:59:21.989431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:48:02.240 [2024-11-20 05:59:21.989446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.288 ms 00:48:02.240 [2024-11-20 05:59:21.989454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.240 [2024-11-20 05:59:22.024476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.240 [2024-11-20 05:59:22.024566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:48:02.240 [2024-11-20 05:59:22.024583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.950 ms 00:48:02.240 [2024-11-20 05:59:22.024592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.240 [2024-11-20 05:59:22.048045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.240 [2024-11-20 05:59:22.048122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:48:02.240 [2024-11-20 05:59:22.048139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.391 ms 00:48:02.240 [2024-11-20 05:59:22.048147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.240 [2024-11-20 05:59:22.068699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.240 [2024-11-20 05:59:22.068884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:48:02.240 [2024-11-20 05:59:22.068904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.505 ms 00:48:02.240 [2024-11-20 05:59:22.068912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.240 [2024-11-20 05:59:22.069891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.240 [2024-11-20 05:59:22.069920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:48:02.240 [2024-11-20 05:59:22.069931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:48:02.240 [2024-11-20 05:59:22.069943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.178618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.178822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:48:02.500 [2024-11-20 05:59:22.178853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 108.843 ms 00:48:02.500 [2024-11-20 05:59:22.178862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.195228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:48:02.500 [2024-11-20 05:59:22.200862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.200920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:48:02.500 [2024-11-20 05:59:22.200937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.926 ms 00:48:02.500 [2024-11-20 05:59:22.200948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.201105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.201119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:48:02.500 [2024-11-20 05:59:22.201130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:48:02.500 [2024-11-20 05:59:22.201146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.202604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.202637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:48:02.500 [2024-11-20 05:59:22.202648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:48:02.500 [2024-11-20 05:59:22.202657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.202694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.202706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:48:02.500 [2024-11-20 05:59:22.202715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:48:02.500 [2024-11-20 05:59:22.202724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.202770] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:48:02.500 [2024-11-20 05:59:22.202782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.202791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:48:02.500 [2024-11-20 05:59:22.202815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:48:02.500 [2024-11-20 05:59:22.202825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.249244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.249431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:48:02.500 [2024-11-20 05:59:22.249452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.483 ms 00:48:02.500 [2024-11-20 05:59:22.249474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:02.500 [2024-11-20 05:59:22.249637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:02.500 [2024-11-20 05:59:22.249650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:48:02.500 [2024-11-20 05:59:22.249661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:48:02.500 [2024-11-20 05:59:22.249670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
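The layout dump above is internally consistent once you assume the 4 KiB FTL block size these figures imply. A quick arithmetic check of the headline numbers (a sketch for the reader, not part of the test itself):

  echo $(( 20971520 * 4 / 1024 / 1024 ))      # L2P: 20971520 entries x 4-byte addresses = 80 MiB, matching "Region l2p ... 80.00 MiB"
  echo $(( 2048 * 4096 / 1024 / 1024 ))       # P2L: 2048 checkpoint pages x 4 KiB = 8 MiB, matching each "Region p2lN ... 8.00 MiB"
  echo $(( 0x5000 * 4096 / 1024 / 1024 ))     # SB "Region type:0x2 ... blk_sz:0x5000" = 20480 blocks = the same 80 MiB as the L2P region
  echo $(( 0x1900000 * 4096 / 1024 / 1024 ))  # base-dev "Region type:0x9 ... blk_sz:0x1900000" = 102400 MiB, matching "Region data_btm"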
00:48:02.500 [2024-11-20 05:59:22.251463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 488.637 ms, result 0 00:48:03.883  [2024-11-20T05:59:24.743Z] Copying: 33/1024 [MB] (33 MBps) [2024-11-20T05:59:25.679Z] Copying: 68/1024 [MB] (35 MBps) [2024-11-20T05:59:26.612Z] Copying: 101/1024 [MB] (32 MBps) [2024-11-20T05:59:27.548Z] Copying: 134/1024 [MB] (33 MBps) [2024-11-20T05:59:28.484Z] Copying: 167/1024 [MB] (32 MBps) [2024-11-20T05:59:29.419Z] Copying: 201/1024 [MB] (34 MBps) [2024-11-20T05:59:30.793Z] Copying: 231/1024 [MB] (29 MBps) [2024-11-20T05:59:31.729Z] Copying: 261/1024 [MB] (29 MBps) [2024-11-20T05:59:32.672Z] Copying: 290/1024 [MB] (29 MBps) [2024-11-20T05:59:33.609Z] Copying: 318/1024 [MB] (28 MBps) [2024-11-20T05:59:34.544Z] Copying: 345/1024 [MB] (27 MBps) [2024-11-20T05:59:35.478Z] Copying: 373/1024 [MB] (27 MBps) [2024-11-20T05:59:36.415Z] Copying: 400/1024 [MB] (27 MBps) [2024-11-20T05:59:37.793Z] Copying: 428/1024 [MB] (27 MBps) [2024-11-20T05:59:38.732Z] Copying: 456/1024 [MB] (28 MBps) [2024-11-20T05:59:39.668Z] Copying: 484/1024 [MB] (27 MBps) [2024-11-20T05:59:40.602Z] Copying: 514/1024 [MB] (29 MBps) [2024-11-20T05:59:41.537Z] Copying: 544/1024 [MB] (30 MBps) [2024-11-20T05:59:42.470Z] Copying: 577/1024 [MB] (32 MBps) [2024-11-20T05:59:43.404Z] Copying: 608/1024 [MB] (31 MBps) [2024-11-20T05:59:44.778Z] Copying: 638/1024 [MB] (30 MBps) [2024-11-20T05:59:45.715Z] Copying: 669/1024 [MB] (31 MBps) [2024-11-20T05:59:46.653Z] Copying: 701/1024 [MB] (31 MBps) [2024-11-20T05:59:47.607Z] Copying: 730/1024 [MB] (29 MBps) [2024-11-20T05:59:48.540Z] Copying: 763/1024 [MB] (32 MBps) [2024-11-20T05:59:49.477Z] Copying: 797/1024 [MB] (34 MBps) [2024-11-20T05:59:50.415Z] Copying: 830/1024 [MB] (32 MBps) [2024-11-20T05:59:51.786Z] Copying: 864/1024 [MB] (33 MBps) [2024-11-20T05:59:52.722Z] Copying: 896/1024 [MB] (32 MBps) [2024-11-20T05:59:53.656Z] Copying: 929/1024 [MB] (33 MBps) [2024-11-20T05:59:54.591Z] Copying: 964/1024 [MB] (34 MBps) [2024-11-20T05:59:55.524Z] Copying: 997/1024 [MB] (33 MBps) [2024-11-20T05:59:55.524Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-20 05:59:55.454092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.605 [2024-11-20 05:59:55.454342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:48:35.605 [2024-11-20 05:59:55.454417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:48:35.605 [2024-11-20 05:59:55.454482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.605 [2024-11-20 05:59:55.454586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:48:35.605 [2024-11-20 05:59:55.464365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.605 [2024-11-20 05:59:55.464434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:48:35.605 [2024-11-20 05:59:55.464466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.754 ms 00:48:35.605 [2024-11-20 05:59:55.464477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.605 [2024-11-20 05:59:55.464893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.605 [2024-11-20 05:59:55.464917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:48:35.605 [2024-11-20 05:59:55.464931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.361 ms 00:48:35.605 [2024-11-20 05:59:55.464941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.605 [2024-11-20 05:59:55.468697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.605 [2024-11-20 05:59:55.468724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:48:35.605 [2024-11-20 05:59:55.468736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.746 ms 00:48:35.605 [2024-11-20 05:59:55.468748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.605 [2024-11-20 05:59:55.475154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.605 [2024-11-20 05:59:55.475199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:48:35.605 [2024-11-20 05:59:55.475210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.388 ms 00:48:35.605 [2024-11-20 05:59:55.475218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.605 [2024-11-20 05:59:55.519092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.605 [2024-11-20 05:59:55.519166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:48:35.605 [2024-11-20 05:59:55.519181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.864 ms 00:48:35.605 [2024-11-20 05:59:55.519190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.864 [2024-11-20 05:59:55.542556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.865 [2024-11-20 05:59:55.542651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:48:35.865 [2024-11-20 05:59:55.542668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.360 ms 00:48:35.865 [2024-11-20 05:59:55.542678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.865 [2024-11-20 05:59:55.544736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.865 [2024-11-20 05:59:55.544788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:48:35.865 [2024-11-20 05:59:55.544801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:48:35.865 [2024-11-20 05:59:55.544826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.865 [2024-11-20 05:59:55.585114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.865 [2024-11-20 05:59:55.585183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:48:35.865 [2024-11-20 05:59:55.585196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.343 ms 00:48:35.865 [2024-11-20 05:59:55.585205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.865 [2024-11-20 05:59:55.626557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.865 [2024-11-20 05:59:55.626654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:48:35.865 [2024-11-20 05:59:55.626669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.386 ms 00:48:35.865 [2024-11-20 05:59:55.626677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.865 [2024-11-20 05:59:55.668304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.865 [2024-11-20 05:59:55.668483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:48:35.865 [2024-11-20 05:59:55.668502] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.658 ms 00:48:35.865 [2024-11-20 05:59:55.668511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.865 [2024-11-20 05:59:55.709787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.865 [2024-11-20 05:59:55.709964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:48:35.865 [2024-11-20 05:59:55.709984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.245 ms 00:48:35.865 [2024-11-20 05:59:55.709993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.865 [2024-11-20 05:59:55.710037] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:48:35.865 [2024-11-20 05:59:55.710055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:48:35.865 [2024-11-20 05:59:55.710077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:48:35.865 [2024-11-20 05:59:55.710087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710223] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710432] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:48:35.865 [2024-11-20 05:59:55.710620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 
05:59:55.710637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:48:35.866 [2024-11-20 05:59:55.710852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:48:35.866 [2024-11-20 05:59:55.710909] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:48:35.866 [2024-11-20 05:59:55.710922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32ba1394-50df-48b1-865d-b9cff1078769 00:48:35.866 [2024-11-20 05:59:55.710931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:48:35.866 [2024-11-20 05:59:55.710939] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:48:35.866 [2024-11-20 05:59:55.710958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:48:35.866 [2024-11-20 05:59:55.710967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:48:35.866 [2024-11-20 05:59:55.710975] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:48:35.866 [2024-11-20 05:59:55.710983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:48:35.866 [2024-11-20 05:59:55.711009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:48:35.866 [2024-11-20 05:59:55.711016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:48:35.866 [2024-11-20 05:59:55.711023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:48:35.866 [2024-11-20 05:59:55.711031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.866 [2024-11-20 05:59:55.711040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:48:35.866 [2024-11-20 05:59:55.711049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:48:35.866 [2024-11-20 05:59:55.711057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.866 [2024-11-20 05:59:55.733424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.866 [2024-11-20 05:59:55.733487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:48:35.866 [2024-11-20 05:59:55.733502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.353 ms 00:48:35.866 [2024-11-20 05:59:55.733517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:35.866 [2024-11-20 05:59:55.734177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:48:35.866 [2024-11-20 05:59:55.734206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:48:35.866 [2024-11-20 05:59:55.734224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:48:35.866 [2024-11-20 05:59:55.734233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.125 [2024-11-20 05:59:55.791338] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.125 [2024-11-20 05:59:55.791534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:48:36.125 [2024-11-20 05:59:55.791556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.125 [2024-11-20 05:59:55.791567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.125 [2024-11-20 05:59:55.791673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.125 [2024-11-20 05:59:55.791684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:48:36.125 [2024-11-20 05:59:55.791700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.125 [2024-11-20 05:59:55.791708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.125 [2024-11-20 05:59:55.791853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.125 [2024-11-20 05:59:55.791868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:48:36.125 [2024-11-20 05:59:55.791877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.125 [2024-11-20 05:59:55.791885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.125 [2024-11-20 05:59:55.791906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.125 [2024-11-20 05:59:55.791916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:48:36.125 [2024-11-20 05:59:55.791924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.125 [2024-11-20 05:59:55.791937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.125 [2024-11-20 05:59:55.931756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.125 [2024-11-20 05:59:55.932000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:48:36.125 [2024-11-20 05:59:55.932021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.125 [2024-11-20 05:59:55.932031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.046676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.046762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:48:36.383 [2024-11-20 05:59:56.046788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.046798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.046952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.046964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:48:36.383 [2024-11-20 05:59:56.046973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.046981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.047027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.047037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:48:36.383 [2024-11-20 05:59:56.047045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.047053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:48:36.383 [2024-11-20 05:59:56.047170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.047190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:48:36.383 [2024-11-20 05:59:56.047199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.047207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.047244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.047256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:48:36.383 [2024-11-20 05:59:56.047265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.047272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.047320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.047330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:48:36.383 [2024-11-20 05:59:56.047338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.047346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.047394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:48:36.383 [2024-11-20 05:59:56.047404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:48:36.383 [2024-11-20 05:59:56.047412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:48:36.383 [2024-11-20 05:59:56.047420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:48:36.383 [2024-11-20 05:59:56.047560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 594.595 ms, result 0 00:48:37.755 00:48:37.755 00:48:37.755 05:59:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:48:39.671 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:48:39.671 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:48:39.671 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:48:39.671 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:48:39.671 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79329 00:48:39.930 Process with pid 79329 is not found 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79329 ']' 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 79329 00:48:39.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (79329) - No such process 
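The "testfile2: OK" line above is the whole point of the dirty-shutdown test: data written before an unclean stop must survive FTL recovery. Reduced to a sketch (the paths and the restart helper are illustrative stand-ins, not the test's real functions):

  md5sum testfile2 > testfile2.md5    # checksum recorded while the FTL bdev was live
  kill -9 "$ftl_app_pid"              # stop without a clean FTL shutdown -> dirty state
  restart_target_and_reattach_ftl     # hypothetical: bring the target back, let FTL replay NV cache / P2L checkpoints
  md5sum -c testfile2.md5             # prints "testfile2: OK" only if recovery lost nothing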
00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 79329 is not found' 00:48:39.930 05:59:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:48:40.188 Remove shared memory files 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:48:40.188 ************************************ 00:48:40.188 END TEST ftl_dirty_shutdown 00:48:40.188 ************************************ 00:48:40.188 00:48:40.188 real 3m14.866s 00:48:40.188 user 3m39.022s 00:48:40.188 sys 0m31.662s 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:48:40.188 06:00:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:40.446 06:00:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:48:40.447 06:00:00 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:48:40.447 06:00:00 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:48:40.447 06:00:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:48:40.447 ************************************ 00:48:40.447 START TEST ftl_upgrade_shutdown 00:48:40.447 ************************************ 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:48:40.447 * Looking for test storage... 
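The killprocess trace a few lines up shows the usual autotest guard: probe the pid with kill -0 before killing, and report rather than fail when the process is already gone. Roughly, as a simplified sketch of the common.sh helper (the real one carries extra retry and cleanup logic):

  killprocess() {
      local pid=$1
      [[ -z $pid ]] && return 1                        # no pid recorded, nothing to kill
      if kill -0 "$pid" 2>/dev/null; then              # is the process still alive?
          kill "$pid" && wait "$pid"
      else
          echo "Process with pid $pid is not found"    # the message seen above
      fi
  }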
00:48:40.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:48:40.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:40.447 --rc genhtml_branch_coverage=1 00:48:40.447 --rc genhtml_function_coverage=1 00:48:40.447 --rc genhtml_legend=1 00:48:40.447 --rc geninfo_all_blocks=1 00:48:40.447 --rc geninfo_unexecuted_blocks=1 00:48:40.447 00:48:40.447 ' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:48:40.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:40.447 --rc genhtml_branch_coverage=1 00:48:40.447 --rc genhtml_function_coverage=1 00:48:40.447 --rc genhtml_legend=1 00:48:40.447 --rc geninfo_all_blocks=1 00:48:40.447 --rc geninfo_unexecuted_blocks=1 00:48:40.447 00:48:40.447 ' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:48:40.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:40.447 --rc genhtml_branch_coverage=1 00:48:40.447 --rc genhtml_function_coverage=1 00:48:40.447 --rc genhtml_legend=1 00:48:40.447 --rc geninfo_all_blocks=1 00:48:40.447 --rc geninfo_unexecuted_blocks=1 00:48:40.447 00:48:40.447 ' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:48:40.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:48:40.447 --rc genhtml_branch_coverage=1 00:48:40.447 --rc genhtml_function_coverage=1 00:48:40.447 --rc genhtml_legend=1 00:48:40.447 --rc geninfo_all_blocks=1 00:48:40.447 --rc geninfo_unexecuted_blocks=1 00:48:40.447 00:48:40.447 ' 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:48:40.447 06:00:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:48:40.705 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:48:40.706 06:00:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81416 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81416 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81416 ']' 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:40.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:40.706 06:00:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:40.706 [2024-11-20 06:00:00.512500] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:48:40.706 [2024-11-20 06:00:00.512817] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81416 ] 00:48:40.964 [2024-11-20 06:00:00.707929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:40.964 [2024-11-20 06:00:00.874753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:48:42.337 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:48:42.594 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:48:42.852 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:48:43.109 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:48:43.109 { 00:48:43.109 "name": "basen1", 00:48:43.109 "aliases": [ 00:48:43.109 "95e84b39-9877-4121-9f93-49ae8651f9f4" 00:48:43.109 ], 00:48:43.109 "product_name": "NVMe disk", 00:48:43.109 "block_size": 4096, 00:48:43.109 "num_blocks": 1310720, 00:48:43.109 "uuid": "95e84b39-9877-4121-9f93-49ae8651f9f4", 00:48:43.109 "numa_id": -1, 00:48:43.109 "assigned_rate_limits": { 00:48:43.109 "rw_ios_per_sec": 0, 00:48:43.109 "rw_mbytes_per_sec": 0, 00:48:43.109 "r_mbytes_per_sec": 0, 00:48:43.109 "w_mbytes_per_sec": 0 00:48:43.109 }, 00:48:43.109 "claimed": true, 00:48:43.109 "claim_type": "read_many_write_one", 00:48:43.109 "zoned": false, 00:48:43.109 "supported_io_types": { 00:48:43.109 "read": true, 00:48:43.109 "write": true, 00:48:43.109 "unmap": true, 00:48:43.109 "flush": true, 00:48:43.109 "reset": true, 00:48:43.109 "nvme_admin": true, 00:48:43.109 "nvme_io": true, 00:48:43.109 "nvme_io_md": false, 00:48:43.109 "write_zeroes": true, 00:48:43.109 "zcopy": false, 00:48:43.109 "get_zone_info": false, 00:48:43.109 "zone_management": false, 00:48:43.109 "zone_append": false, 00:48:43.109 "compare": true, 00:48:43.109 "compare_and_write": false, 00:48:43.109 "abort": true, 00:48:43.109 "seek_hole": false, 00:48:43.109 "seek_data": false, 00:48:43.109 "copy": true, 00:48:43.109 "nvme_iov_md": false 00:48:43.109 }, 00:48:43.109 "driver_specific": { 00:48:43.109 "nvme": [ 00:48:43.109 { 00:48:43.109 "pci_address": "0000:00:11.0", 00:48:43.109 "trid": { 00:48:43.109 "trtype": "PCIe", 00:48:43.109 "traddr": "0000:00:11.0" 00:48:43.109 }, 00:48:43.109 "ctrlr_data": { 00:48:43.109 "cntlid": 0, 00:48:43.109 "vendor_id": "0x1b36", 00:48:43.109 "model_number": "QEMU NVMe Ctrl", 00:48:43.109 "serial_number": "12341", 00:48:43.109 "firmware_revision": "8.0.0", 00:48:43.109 "subnqn": "nqn.2019-08.org.qemu:12341", 00:48:43.109 "oacs": { 00:48:43.109 "security": 0, 00:48:43.109 "format": 1, 00:48:43.109 "firmware": 0, 00:48:43.109 "ns_manage": 1 00:48:43.109 }, 00:48:43.109 "multi_ctrlr": false, 00:48:43.109 "ana_reporting": false 00:48:43.109 }, 00:48:43.109 "vs": { 00:48:43.109 "nvme_version": "1.4" 00:48:43.109 }, 00:48:43.109 "ns_data": { 00:48:43.109 "id": 1, 00:48:43.110 "can_share": false 00:48:43.110 } 00:48:43.110 } 00:48:43.110 ], 00:48:43.110 "mp_policy": "active_passive" 00:48:43.110 } 00:48:43.110 } 00:48:43.110 ]' 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:43.110 06:00:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:48:43.367 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=872a227a-9649-4c34-bf07-6c8d809a2c56 00:48:43.367 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:48:43.367 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 872a227a-9649-4c34-bf07-6c8d809a2c56 00:48:43.623 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:48:43.879 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0c17aa9a-51ad-42aa-9424-e68c8a37685b 00:48:43.879 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0c17aa9a-51ad-42aa-9424-e68c8a37685b 00:48:44.136 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=9960146d-df18-4f02-825c-ed1463a059c8 00:48:44.136 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 9960146d-df18-4f02-825c-ed1463a059c8 ]] 00:48:44.136 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 9960146d-df18-4f02-825c-ed1463a059c8 5120 00:48:44.136 06:00:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=9960146d-df18-4f02-825c-ed1463a059c8 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9960146d-df18-4f02-825c-ed1463a059c8 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=9960146d-df18-4f02-825c-ed1463a059c8 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:48:44.136 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9960146d-df18-4f02-825c-ed1463a059c8 00:48:44.394 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:48:44.394 { 00:48:44.394 "name": "9960146d-df18-4f02-825c-ed1463a059c8", 00:48:44.394 "aliases": [ 00:48:44.394 "lvs/basen1p0" 00:48:44.394 ], 00:48:44.394 "product_name": "Logical Volume", 00:48:44.394 "block_size": 4096, 00:48:44.394 "num_blocks": 5242880, 00:48:44.394 "uuid": "9960146d-df18-4f02-825c-ed1463a059c8", 00:48:44.394 "assigned_rate_limits": { 00:48:44.394 "rw_ios_per_sec": 0, 00:48:44.394 "rw_mbytes_per_sec": 0, 00:48:44.394 "r_mbytes_per_sec": 0, 00:48:44.394 "w_mbytes_per_sec": 0 00:48:44.394 }, 00:48:44.394 "claimed": false, 00:48:44.394 "zoned": false, 00:48:44.394 "supported_io_types": { 00:48:44.394 "read": true, 00:48:44.394 "write": true, 00:48:44.394 "unmap": true, 00:48:44.394 "flush": false, 00:48:44.394 "reset": true, 00:48:44.394 "nvme_admin": false, 00:48:44.394 "nvme_io": false, 00:48:44.394 "nvme_io_md": false, 00:48:44.394 "write_zeroes": 
true, 00:48:44.394 "zcopy": false, 00:48:44.394 "get_zone_info": false, 00:48:44.394 "zone_management": false, 00:48:44.394 "zone_append": false, 00:48:44.394 "compare": false, 00:48:44.394 "compare_and_write": false, 00:48:44.394 "abort": false, 00:48:44.394 "seek_hole": true, 00:48:44.394 "seek_data": true, 00:48:44.394 "copy": false, 00:48:44.394 "nvme_iov_md": false 00:48:44.394 }, 00:48:44.394 "driver_specific": { 00:48:44.394 "lvol": { 00:48:44.394 "lvol_store_uuid": "0c17aa9a-51ad-42aa-9424-e68c8a37685b", 00:48:44.394 "base_bdev": "basen1", 00:48:44.394 "thin_provision": true, 00:48:44.394 "num_allocated_clusters": 0, 00:48:44.394 "snapshot": false, 00:48:44.394 "clone": false, 00:48:44.394 "esnap_clone": false 00:48:44.394 } 00:48:44.394 } 00:48:44.394 } 00:48:44.394 ]' 00:48:44.394 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:48:44.394 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:48:44.394 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:48:44.651 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:48:44.651 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:48:44.651 06:00:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:48:44.651 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:48:44.651 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:48:44.651 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:48:44.908 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:48:44.908 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:48:44.908 06:00:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:48:45.165 06:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:48:45.165 06:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:48:45.165 06:00:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9960146d-df18-4f02-825c-ed1463a059c8 -c cachen1p0 --l2p_dram_limit 2 00:48:45.424 [2024-11-20 06:00:05.235598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.235687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:48:45.424 [2024-11-20 06:00:05.235708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:48:45.424 [2024-11-20 06:00:05.235734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.235838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.235852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:48:45.424 [2024-11-20 06:00:05.235865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.079 ms 00:48:45.424 [2024-11-20 06:00:05.235874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.235920] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:48:45.424 [2024-11-20 
06:00:05.237165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:48:45.424 [2024-11-20 06:00:05.237209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.237220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:48:45.424 [2024-11-20 06:00:05.237232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.293 ms 00:48:45.424 [2024-11-20 06:00:05.237242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.237368] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 04afc1a3-e019-45a2-a482-026a8c70763e 00:48:45.424 [2024-11-20 06:00:05.239993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.240042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:48:45.424 [2024-11-20 06:00:05.240057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:48:45.424 [2024-11-20 06:00:05.240069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.254568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.254645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:48:45.424 [2024-11-20 06:00:05.254661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.407 ms 00:48:45.424 [2024-11-20 06:00:05.254675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.254749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.254773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:48:45.424 [2024-11-20 06:00:05.254784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:48:45.424 [2024-11-20 06:00:05.254799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.254954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.254973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:48:45.424 [2024-11-20 06:00:05.254983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:48:45.424 [2024-11-20 06:00:05.254998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.255028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:48:45.424 [2024-11-20 06:00:05.262116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.262301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:48:45.424 [2024-11-20 06:00:05.262330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.105 ms 00:48:45.424 [2024-11-20 06:00:05.262342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.262396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.262408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:48:45.424 [2024-11-20 06:00:05.262421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:48:45.424 [2024-11-20 06:00:05.262432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.262493] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:48:45.424 [2024-11-20 06:00:05.262669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:48:45.424 [2024-11-20 06:00:05.262691] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:48:45.424 [2024-11-20 06:00:05.262706] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:48:45.424 [2024-11-20 06:00:05.262723] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:48:45.424 [2024-11-20 06:00:05.262735] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:48:45.424 [2024-11-20 06:00:05.262749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:48:45.424 [2024-11-20 06:00:05.262759] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:48:45.424 [2024-11-20 06:00:05.262775] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:48:45.424 [2024-11-20 06:00:05.262785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:48:45.424 [2024-11-20 06:00:05.262798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.262825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:48:45.424 [2024-11-20 06:00:05.262840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:48:45.424 [2024-11-20 06:00:05.262850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.262954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.424 [2024-11-20 06:00:05.262965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:48:45.424 [2024-11-20 06:00:05.262980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:48:45.424 [2024-11-20 06:00:05.263013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.424 [2024-11-20 06:00:05.263136] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:48:45.424 [2024-11-20 06:00:05.263147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:48:45.424 [2024-11-20 06:00:05.263160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:45.424 [2024-11-20 06:00:05.263169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.424 [2024-11-20 06:00:05.263182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:48:45.424 [2024-11-20 06:00:05.263190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:48:45.424 [2024-11-20 06:00:05.263202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:48:45.424 [2024-11-20 06:00:05.263210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:48:45.424 [2024-11-20 06:00:05.263220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:48:45.424 [2024-11-20 06:00:05.263227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.424 [2024-11-20 06:00:05.263238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:48:45.424 [2024-11-20 06:00:05.263245] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:48:45.424 [2024-11-20 06:00:05.263255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.424 [2024-11-20 06:00:05.263264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:48:45.424 [2024-11-20 06:00:05.263274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:48:45.424 [2024-11-20 06:00:05.263281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.424 [2024-11-20 06:00:05.263293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:48:45.424 [2024-11-20 06:00:05.263301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:48:45.424 [2024-11-20 06:00:05.263312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.424 [2024-11-20 06:00:05.263319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:48:45.424 [2024-11-20 06:00:05.263330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:48:45.424 [2024-11-20 06:00:05.263338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:45.424 [2024-11-20 06:00:05.263348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:48:45.424 [2024-11-20 06:00:05.263355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:48:45.424 [2024-11-20 06:00:05.263366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:45.425 [2024-11-20 06:00:05.263373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:48:45.425 [2024-11-20 06:00:05.263383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:48:45.425 [2024-11-20 06:00:05.263390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:45.425 [2024-11-20 06:00:05.263400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:48:45.425 [2024-11-20 06:00:05.263408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:48:45.425 [2024-11-20 06:00:05.263417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:48:45.425 [2024-11-20 06:00:05.263425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:48:45.425 [2024-11-20 06:00:05.263438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:48:45.425 [2024-11-20 06:00:05.263445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.425 [2024-11-20 06:00:05.263455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:48:45.425 [2024-11-20 06:00:05.263462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:48:45.425 [2024-11-20 06:00:05.263472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.425 [2024-11-20 06:00:05.263479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:48:45.425 [2024-11-20 06:00:05.263489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:48:45.425 [2024-11-20 06:00:05.263497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.425 [2024-11-20 06:00:05.263508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:48:45.425 [2024-11-20 06:00:05.263516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:48:45.425 [2024-11-20 06:00:05.263525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.425 [2024-11-20 06:00:05.263532] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:48:45.425 [2024-11-20 06:00:05.263543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:48:45.425 [2024-11-20 06:00:05.263552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:48:45.425 [2024-11-20 06:00:05.263565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:48:45.425 [2024-11-20 06:00:05.263574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:48:45.425 [2024-11-20 06:00:05.263587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:48:45.425 [2024-11-20 06:00:05.263595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:48:45.425 [2024-11-20 06:00:05.263607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:48:45.425 [2024-11-20 06:00:05.263614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:48:45.425 [2024-11-20 06:00:05.263625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:48:45.425 [2024-11-20 06:00:05.263639] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:48:45.425 [2024-11-20 06:00:05.263653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:48:45.425 [2024-11-20 06:00:05.263678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:48:45.425 [2024-11-20 06:00:05.263706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:48:45.425 [2024-11-20 06:00:05.263717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:48:45.425 [2024-11-20 06:00:05.263725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:48:45.425 [2024-11-20 06:00:05.263736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:48:45.425 [2024-11-20 06:00:05.263803] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:48:45.425 [2024-11-20 06:00:05.263834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:48:45.425 [2024-11-20 06:00:05.263854] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:48:45.425 [2024-11-20 06:00:05.263863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:48:45.425 [2024-11-20 06:00:05.263874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:48:45.425 [2024-11-20 06:00:05.263882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:45.425 [2024-11-20 06:00:05.263894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:48:45.425 [2024-11-20 06:00:05.263908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:48:45.425 [2024-11-20 06:00:05.263920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:45.425 [2024-11-20 06:00:05.263970] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
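The records above close out the bdev_ftl_create startup trace: layout setup, the superblock region dump, and the NV cache scrub notice that the next lines continue. For orientation, the whole FTL bdev assembly that ftl/common.sh drove over rpc.py earlier in this log condenses to the sketch below. It is an outline reconstructed from the xtrace, not the script verbatim; the BDFs, sizes, and bdev names are simply the values this run used.

    #!/usr/bin/env bash
    # Sketch of the FTL bdev assembly traced above (values from this run).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: QEMU NVMe at 0000:00:11.0, attached as basen1
    # (1310720 blocks x 4096 B, i.e. 5120 MiB).
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0

    # Thin-provisioned 20480 MiB lvol on top; both RPCs print a UUID that the
    # next call consumes, hence the command substitutions.
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
    base_bdev=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")

    # NV cache: second controller at 0000:00:10.0, split into a 5120 MiB part.
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create cachen1 -s 5120 1

    # Assemble the FTL bdev; this call is what produces the startup trace
    # around this note (layout setup, NV cache scrub, L2P initialization).
    $rpc -t 60 bdev_ftl_create -b ftl -d "$base_bdev" -c cachen1p0 --l2p_dram_limit 2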
00:48:45.425 [2024-11-20 06:00:05.263987] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:48:50.693 [2024-11-20 06:00:09.752699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.693 [2024-11-20 06:00:09.752789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:48:50.693 [2024-11-20 06:00:09.752823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4497.382 ms 00:48:50.693 [2024-11-20 06:00:09.752839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.693 [2024-11-20 06:00:09.804151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.693 [2024-11-20 06:00:09.804231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:48:50.693 [2024-11-20 06:00:09.804248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.984 ms 00:48:50.693 [2024-11-20 06:00:09.804260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.693 [2024-11-20 06:00:09.804433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.693 [2024-11-20 06:00:09.804449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:48:50.693 [2024-11-20 06:00:09.804460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:48:50.693 [2024-11-20 06:00:09.804496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.693 [2024-11-20 06:00:09.862765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.693 [2024-11-20 06:00:09.862893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:48:50.693 [2024-11-20 06:00:09.862919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 58.313 ms 00:48:50.693 [2024-11-20 06:00:09.862938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.693 [2024-11-20 06:00:09.863032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.693 [2024-11-20 06:00:09.863059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:48:50.693 [2024-11-20 06:00:09.863076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:48:50.694 [2024-11-20 06:00:09.863093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:09.864183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:09.864237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:48:50.694 [2024-11-20 06:00:09.864253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.946 ms 00:48:50.694 [2024-11-20 06:00:09.864268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:09.864368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:09.864385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:48:50.694 [2024-11-20 06:00:09.864403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:48:50.694 [2024-11-20 06:00:09.864424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:09.893937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:09.894052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:48:50.694 [2024-11-20 06:00:09.894070] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.534 ms 00:48:50.694 [2024-11-20 06:00:09.894085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:09.921531] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:48:50.694 [2024-11-20 06:00:09.923601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:09.923632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:48:50.694 [2024-11-20 06:00:09.923661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.415 ms 00:48:50.694 [2024-11-20 06:00:09.923672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:09.972190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:09.972273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:48:50.694 [2024-11-20 06:00:09.972295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.535 ms 00:48:50.694 [2024-11-20 06:00:09.972305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:09.972409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:09.972425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:48:50.694 [2024-11-20 06:00:09.972442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:48:50.694 [2024-11-20 06:00:09.972450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.015535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.015724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:48:50.694 [2024-11-20 06:00:10.015772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.069 ms 00:48:50.694 [2024-11-20 06:00:10.015800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.055429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.055594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:48:50.694 [2024-11-20 06:00:10.055635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.604 ms 00:48:50.694 [2024-11-20 06:00:10.055659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.056528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.056585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:48:50.694 [2024-11-20 06:00:10.056624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.821 ms 00:48:50.694 [2024-11-20 06:00:10.056670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.229862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.230062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:48:50.694 [2024-11-20 06:00:10.230095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 173.417 ms 00:48:50.694 [2024-11-20 06:00:10.230106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.274672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
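Every management step in this startup sequence is traced as an Action/name/duration/status quadruple, so per-step timings can be pulled straight out of a captured console log. A throwaway sketch (the filename ftl.log is a stand-in, and it assumes one record per line as the console emits them):

    awk '/trace_step/ && /name:/     { sub(/.*name: /, "");     name = $0 }
         /trace_step/ && /duration:/ { sub(/.*duration: /, ""); print $0 "\t" name }' ftl.log

Against this run it would show the NV cache scrub (4497.382 ms) dominating everything traced so far.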
00:48:50.694 [2024-11-20 06:00:10.274759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:48:50.694 [2024-11-20 06:00:10.274796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.487 ms 00:48:50.694 [2024-11-20 06:00:10.274826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.317615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.317703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:48:50.694 [2024-11-20 06:00:10.317722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.787 ms 00:48:50.694 [2024-11-20 06:00:10.317731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.356461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.356533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:48:50.694 [2024-11-20 06:00:10.356552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.689 ms 00:48:50.694 [2024-11-20 06:00:10.356560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.356618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.356629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:48:50.694 [2024-11-20 06:00:10.356645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:48:50.694 [2024-11-20 06:00:10.356653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.356772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:48:50.694 [2024-11-20 06:00:10.356782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:48:50.694 [2024-11-20 06:00:10.356798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:48:50.694 [2024-11-20 06:00:10.356822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:48:50.694 [2024-11-20 06:00:10.358385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5132.104 ms, result 0 00:48:50.694 { 00:48:50.694 "name": "ftl", 00:48:50.694 "uuid": "04afc1a3-e019-45a2-a482-026a8c70763e" 00:48:50.694 } 00:48:50.694 06:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:48:50.952 [2024-11-20 06:00:10.616657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:50.952 06:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:48:51.210 06:00:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:48:51.469 [2024-11-20 06:00:11.144239] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:48:51.469 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:48:51.727 [2024-11-20 06:00:11.411950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:48:51.727 06:00:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:48:51.985 Fill FTL, iteration 1 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81570 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81570 /var/tmp/spdk.tgt.sock 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81570 ']' 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:48:51.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:48:51.985 06:00:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:48:52.244 [2024-11-20 06:00:11.947254] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
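At this point the test has two SPDK processes: the target that owns the FTL bdev (core 0, default RPC socket) and the initiator spdk_tgt just launched on core 1 with its own socket at /var/tmp/spdk.tgt.sock. The plumbing between them, condensed from the rpc.py calls traced above and on the lines that follow, is ordinary NVMe/TCP loopback; the NQN and the 127.0.0.1:4420 listener are this run's values:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: export the ftl bdev as a namespace of cnode0 over TCP.
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

    # Initiator side: attach over loopback; the namespace surfaces as ftln1,
    # which is the bdev the spdk_dd runs below read and write.
    $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0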
00:48:52.244 [2024-11-20 06:00:11.947495] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81570 ] 00:48:52.244 [2024-11-20 06:00:12.126543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:52.502 [2024-11-20 06:00:12.262228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:53.441 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:48:53.441 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:48:53.441 06:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:48:54.009 ftln1 00:48:54.009 06:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:48:54.009 06:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:48:54.009 06:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:48:54.009 06:00:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81570 00:48:54.009 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81570 ']' 00:48:54.009 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81570 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81570 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81570' 00:48:54.267 killing process with pid 81570 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81570 00:48:54.267 06:00:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81570 00:48:56.799 06:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:48:56.799 06:00:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:48:57.058 [2024-11-20 06:00:16.780296] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
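The dd process starting here is the first fill pass of the loop that upgrade_shutdown.sh set up above (bs=1048576, count=1024, qd=2, iterations=2, sums=()). Reconstructed in outline from the xtrace, with tcp_dd standing for the spdk_dd wrapper pointed at the NVMe/TCP-attached ftln1:

    seek=0; skip=0; iterations=2; sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        # Write 1024 MiB of fresh urandom data into the FTL bdev at the next offset.
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        (( seek += 1024 ))
        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        # Read the same range back out and checksum it on the host side.
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        (( skip += 1024 ))
        sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    done

The sums[] entries recorded below (b0d863325d4ea4601719fe3d89189968 and ec660c0ae414191e40a524d624c262d8) are the reference checksums, evidently kept for comparison after the shutdown/upgrade cycle that gives the test its name; that part falls outside this excerpt.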
00:48:57.058 [2024-11-20 06:00:16.780948] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81626 ] 00:48:57.058 [2024-11-20 06:00:16.957142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:57.317 [2024-11-20 06:00:17.086090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:48:58.696  [2024-11-20T06:00:19.993Z] Copying: 227/1024 [MB] (227 MBps) [2024-11-20T06:00:20.583Z] Copying: 446/1024 [MB] (219 MBps) [2024-11-20T06:00:21.958Z] Copying: 674/1024 [MB] (228 MBps) [2024-11-20T06:00:22.216Z] Copying: 898/1024 [MB] (224 MBps) [2024-11-20T06:00:23.592Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:49:03.673 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:49:03.673 Calculate MD5 checksum, iteration 1 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:03.673 06:00:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:03.931 [2024-11-20 06:00:23.616628] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:49:03.931 [2024-11-20 06:00:23.616873] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81701 ] 00:49:03.931 [2024-11-20 06:00:23.795752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:04.190 [2024-11-20 06:00:23.979105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:06.094  [2024-11-20T06:00:26.580Z] Copying: 567/1024 [MB] (567 MBps) [2024-11-20T06:00:27.517Z] Copying: 1024/1024 [MB] (average 544 MBps) 00:49:07.598 00:49:07.857 06:00:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:49:07.857 06:00:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b0d863325d4ea4601719fe3d89189968 00:49:09.764 Fill FTL, iteration 2 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:09.764 06:00:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:49:10.022 [2024-11-20 06:00:29.701266] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:49:10.022 [2024-11-20 06:00:29.701527] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81769 ] 00:49:10.022 [2024-11-20 06:00:29.869764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:10.281 [2024-11-20 06:00:29.990734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:11.655  [2024-11-20T06:00:32.509Z] Copying: 206/1024 [MB] (206 MBps) [2024-11-20T06:00:33.885Z] Copying: 405/1024 [MB] (199 MBps) [2024-11-20T06:00:34.826Z] Copying: 586/1024 [MB] (181 MBps) [2024-11-20T06:00:35.770Z] Copying: 795/1024 [MB] (209 MBps) [2024-11-20T06:00:35.770Z] Copying: 989/1024 [MB] (194 MBps) [2024-11-20T06:00:37.147Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:49:17.228 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:49:17.486 Calculate MD5 checksum, iteration 2 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:17.486 06:00:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:17.486 [2024-11-20 06:00:37.256700] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:49:17.486 [2024-11-20 06:00:37.257034] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81840 ] 00:49:17.746 [2024-11-20 06:00:37.447051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:17.746 [2024-11-20 06:00:37.613546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:19.646  [2024-11-20T06:00:40.501Z] Copying: 569/1024 [MB] (569 MBps) [2024-11-20T06:00:40.501Z] Copying: 1014/1024 [MB] (445 MBps) [2024-11-20T06:00:42.420Z] Copying: 1024/1024 [MB] (average 507 MBps) 00:49:22.501 00:49:22.501 06:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:49:22.501 06:00:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:24.413 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:49:24.413 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ec660c0ae414191e40a524d624c262d8 00:49:24.413 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:49:24.413 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:49:24.413 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:49:24.672 [2024-11-20 06:00:44.553898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:24.672 [2024-11-20 06:00:44.553983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:24.672 [2024-11-20 06:00:44.554002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:49:24.672 [2024-11-20 06:00:44.554013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:24.672 [2024-11-20 06:00:44.554054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:24.672 [2024-11-20 06:00:44.554066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:24.672 [2024-11-20 06:00:44.554082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:49:24.672 [2024-11-20 06:00:44.554092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:24.672 [2024-11-20 06:00:44.554117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:24.672 [2024-11-20 06:00:44.554129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:24.672 [2024-11-20 06:00:44.554139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:24.672 [2024-11-20 06:00:44.554149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:24.672 [2024-11-20 06:00:44.554232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.372 ms, result 0 00:49:24.672 true 00:49:24.672 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:24.933 { 00:49:24.933 "name": "ftl", 00:49:24.933 "properties": [ 00:49:24.933 { 00:49:24.933 "name": "superblock_version", 00:49:24.933 "value": 5, 00:49:24.933 "read-only": true 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "name": "base_device", 00:49:24.933 "bands": [ 00:49:24.933 { 00:49:24.933 "id": 
0, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 1, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 2, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 3, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 4, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 5, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 6, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 7, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 8, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 9, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 10, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 11, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 12, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 13, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 14, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 15, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 16, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 17, 00:49:24.933 "state": "FREE", 00:49:24.933 "validity": 0.0 00:49:24.933 } 00:49:24.933 ], 00:49:24.933 "read-only": true 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "name": "cache_device", 00:49:24.933 "type": "bdev", 00:49:24.933 "chunks": [ 00:49:24.933 { 00:49:24.933 "id": 0, 00:49:24.933 "state": "INACTIVE", 00:49:24.933 "utilization": 0.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 1, 00:49:24.933 "state": "CLOSED", 00:49:24.933 "utilization": 1.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 2, 00:49:24.933 "state": "CLOSED", 00:49:24.933 "utilization": 1.0 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 3, 00:49:24.933 "state": "OPEN", 00:49:24.933 "utilization": 0.001953125 00:49:24.933 }, 00:49:24.933 { 00:49:24.933 "id": 4, 00:49:24.933 "state": "OPEN", 00:49:24.933 "utilization": 0.0 00:49:24.933 } 00:49:24.933 ], 00:49:24.933 "read-only": true 00:49:24.933 }, 00:49:24.934 { 00:49:24.934 "name": "verbose_mode", 00:49:24.934 "value": true, 00:49:24.934 "unit": "", 00:49:24.934 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:49:24.934 }, 00:49:24.934 { 00:49:24.934 "name": "prep_upgrade_on_shutdown", 00:49:24.934 "value": false, 00:49:24.934 "unit": "", 00:49:24.934 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:49:24.934 } 00:49:24.934 ] 00:49:24.934 } 00:49:25.194 06:00:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:49:25.194 [2024-11-20 06:00:45.077907] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:25.194 [2024-11-20 06:00:45.077976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:25.194 [2024-11-20 06:00:45.077992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:49:25.194 [2024-11-20 06:00:45.078002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:25.194 [2024-11-20 06:00:45.078035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:25.194 [2024-11-20 06:00:45.078046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:25.194 [2024-11-20 06:00:45.078056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:25.194 [2024-11-20 06:00:45.078064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:25.194 [2024-11-20 06:00:45.078086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:25.194 [2024-11-20 06:00:45.078096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:25.194 [2024-11-20 06:00:45.078105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:49:25.194 [2024-11-20 06:00:45.078113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:25.194 [2024-11-20 06:00:45.078190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.295 ms, result 0 00:49:25.194 true 00:49:25.194 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:49:25.194 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:25.194 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:49:25.766 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:49:25.766 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:49:25.766 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:49:25.766 [2024-11-20 06:00:45.669890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:25.766 [2024-11-20 06:00:45.669963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:25.766 [2024-11-20 06:00:45.669980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:49:25.766 [2024-11-20 06:00:45.669990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:25.766 [2024-11-20 06:00:45.670026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:25.766 [2024-11-20 06:00:45.670038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:25.766 [2024-11-20 06:00:45.670048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:25.766 [2024-11-20 06:00:45.670064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:25.766 [2024-11-20 06:00:45.670087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:25.766 [2024-11-20 06:00:45.670097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:25.767 [2024-11-20 06:00:45.670106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:25.767 [2024-11-20 
06:00:45.670116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:25.767 [2024-11-20 06:00:45.670189] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.305 ms, result 0 00:49:25.767 true 00:49:26.043 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:26.349 { 00:49:26.349 "name": "ftl", 00:49:26.349 "properties": [ 00:49:26.349 { 00:49:26.349 "name": "superblock_version", 00:49:26.349 "value": 5, 00:49:26.349 "read-only": true 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "name": "base_device", 00:49:26.349 "bands": [ 00:49:26.349 { 00:49:26.349 "id": 0, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 1, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 2, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 3, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 4, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 5, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 6, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 7, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 8, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 9, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 10, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 11, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 12, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 13, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 14, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 15, 00:49:26.349 "state": "FREE", 00:49:26.349 "validity": 0.0 00:49:26.349 }, 00:49:26.349 { 00:49:26.349 "id": 16, 00:49:26.350 "state": "FREE", 00:49:26.350 "validity": 0.0 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "id": 17, 00:49:26.350 "state": "FREE", 00:49:26.350 "validity": 0.0 00:49:26.350 } 00:49:26.350 ], 00:49:26.350 "read-only": true 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "name": "cache_device", 00:49:26.350 "type": "bdev", 00:49:26.350 "chunks": [ 00:49:26.350 { 00:49:26.350 "id": 0, 00:49:26.350 "state": "INACTIVE", 00:49:26.350 "utilization": 0.0 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "id": 1, 00:49:26.350 "state": "CLOSED", 00:49:26.350 "utilization": 1.0 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "id": 2, 00:49:26.350 "state": "CLOSED", 00:49:26.350 "utilization": 1.0 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "id": 3, 00:49:26.350 "state": "OPEN", 00:49:26.350 "utilization": 0.001953125 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "id": 4, 00:49:26.350 "state": "OPEN", 00:49:26.350 "utilization": 0.0 00:49:26.350 } 00:49:26.350 ], 00:49:26.350 "read-only": true 00:49:26.350 
}, 00:49:26.350 { 00:49:26.350 "name": "verbose_mode", 00:49:26.350 "value": true, 00:49:26.350 "unit": "", 00:49:26.350 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:49:26.350 }, 00:49:26.350 { 00:49:26.350 "name": "prep_upgrade_on_shutdown", 00:49:26.350 "value": true, 00:49:26.350 "unit": "", 00:49:26.350 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:49:26.350 } 00:49:26.350 ] 00:49:26.350 } 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81416 ]] 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81416 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81416 ']' 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81416 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:49:26.350 06:00:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:49:26.350 06:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81416 00:49:26.350 killing process with pid 81416 00:49:26.350 06:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:49:26.350 06:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:49:26.350 06:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81416' 00:49:26.350 06:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81416 00:49:26.350 06:00:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81416 00:49:27.729 [2024-11-20 06:00:47.522756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:49:27.729 [2024-11-20 06:00:47.547388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:27.729 [2024-11-20 06:00:47.547472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:49:27.729 [2024-11-20 06:00:47.547491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:49:27.729 [2024-11-20 06:00:47.547502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:27.729 [2024-11-20 06:00:47.547531] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:49:27.729 [2024-11-20 06:00:47.553162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:27.729 [2024-11-20 06:00:47.553233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:49:27.729 [2024-11-20 06:00:47.553250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.618 ms 00:49:27.729 [2024-11-20 06:00:47.553262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.790252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.790414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:49:37.719 [2024-11-20 06:00:55.790435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8252.819 ms 00:49:37.719 [2024-11-20 06:00:55.790445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 
06:00:55.791671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.791704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:49:37.719 [2024-11-20 06:00:55.791717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.202 ms 00:49:37.719 [2024-11-20 06:00:55.791726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.792798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.792835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:49:37.719 [2024-11-20 06:00:55.792857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.042 ms 00:49:37.719 [2024-11-20 06:00:55.792866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.810555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.810614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:49:37.719 [2024-11-20 06:00:55.810627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.671 ms 00:49:37.719 [2024-11-20 06:00:55.810652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.820731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.820783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:49:37.719 [2024-11-20 06:00:55.820796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.053 ms 00:49:37.719 [2024-11-20 06:00:55.820816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.820927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.820939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:49:37.719 [2024-11-20 06:00:55.820949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:49:37.719 [2024-11-20 06:00:55.820965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.836109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.836229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:49:37.719 [2024-11-20 06:00:55.836246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.154 ms 00:49:37.719 [2024-11-20 06:00:55.836255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.851537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.851637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:49:37.719 [2024-11-20 06:00:55.851652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.271 ms 00:49:37.719 [2024-11-20 06:00:55.851661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.867169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.867220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:49:37.719 [2024-11-20 06:00:55.867233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.498 ms 00:49:37.719 [2024-11-20 06:00:55.867241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:49:37.719 [2024-11-20 06:00:55.882664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.882708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:49:37.719 [2024-11-20 06:00:55.882721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.361 ms 00:49:37.719 [2024-11-20 06:00:55.882729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.882764] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:49:37.719 [2024-11-20 06:00:55.882782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:49:37.719 [2024-11-20 06:00:55.882794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:49:37.719 [2024-11-20 06:00:55.882838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:49:37.719 [2024-11-20 06:00:55.882849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:49:37.719 [2024-11-20 06:00:55.882977] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:49:37.719 [2024-11-20 06:00:55.882986] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 04afc1a3-e019-45a2-a482-026a8c70763e 00:49:37.719 [2024-11-20 06:00:55.882995] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:49:37.719 [2024-11-20 
06:00:55.883003] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:49:37.719 [2024-11-20 06:00:55.883010] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:49:37.719 [2024-11-20 06:00:55.883019] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:49:37.719 [2024-11-20 06:00:55.883027] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:49:37.719 [2024-11-20 06:00:55.883036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:49:37.719 [2024-11-20 06:00:55.883050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:49:37.719 [2024-11-20 06:00:55.883056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:49:37.719 [2024-11-20 06:00:55.883063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:49:37.719 [2024-11-20 06:00:55.883073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.883082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:49:37.719 [2024-11-20 06:00:55.883106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:49:37.719 [2024-11-20 06:00:55.883114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.905697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.905751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:49:37.719 [2024-11-20 06:00:55.905764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.591 ms 00:49:37.719 [2024-11-20 06:00:55.905773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.906460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:37.719 [2024-11-20 06:00:55.906476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:49:37.719 [2024-11-20 06:00:55.906486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:49:37.719 [2024-11-20 06:00:55.906494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.978100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.719 [2024-11-20 06:00:55.978170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:49:37.719 [2024-11-20 06:00:55.978186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.719 [2024-11-20 06:00:55.978201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.978265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.719 [2024-11-20 06:00:55.978274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:49:37.719 [2024-11-20 06:00:55.978283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.719 [2024-11-20 06:00:55.978292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:55.978425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.719 [2024-11-20 06:00:55.978438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:49:37.719 [2024-11-20 06:00:55.978447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.719 [2024-11-20 06:00:55.978455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:49:37.719 [2024-11-20 06:00:55.978479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.719 [2024-11-20 06:00:55.978488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:49:37.719 [2024-11-20 06:00:55.978496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.719 [2024-11-20 06:00:55.978505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:56.121142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.719 [2024-11-20 06:00:56.121229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:49:37.719 [2024-11-20 06:00:56.121246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.719 [2024-11-20 06:00:56.121255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.719 [2024-11-20 06:00:56.233180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.719 [2024-11-20 06:00:56.233268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:49:37.719 [2024-11-20 06:00:56.233283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.719 [2024-11-20 06:00:56.233292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.233426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.720 [2024-11-20 06:00:56.233436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:49:37.720 [2024-11-20 06:00:56.233445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.720 [2024-11-20 06:00:56.233453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.233500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.720 [2024-11-20 06:00:56.233531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:49:37.720 [2024-11-20 06:00:56.233539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.720 [2024-11-20 06:00:56.233546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.233666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.720 [2024-11-20 06:00:56.233679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:49:37.720 [2024-11-20 06:00:56.233687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.720 [2024-11-20 06:00:56.233695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.233732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.720 [2024-11-20 06:00:56.233742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:49:37.720 [2024-11-20 06:00:56.233754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.720 [2024-11-20 06:00:56.233762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.233843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.720 [2024-11-20 06:00:56.233870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:49:37.720 [2024-11-20 06:00:56.233879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.720 [2024-11-20 06:00:56.233887] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.233942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:49:37.720 [2024-11-20 06:00:56.233957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:49:37.720 [2024-11-20 06:00:56.233965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:49:37.720 [2024-11-20 06:00:56.233974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:37.720 [2024-11-20 06:00:56.234119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8703.462 ms, result 0 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82093 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82093 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 82093 ']' 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:49:41.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:49:41.003 06:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:49:41.261 [2024-11-20 06:01:00.983086] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:49:41.262 [2024-11-20 06:01:00.983349] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82093 ] 00:49:41.262 [2024-11-20 06:01:01.167567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:41.520 [2024-11-20 06:01:01.318097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:49:42.909 [2024-11-20 06:01:02.509299] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:49:42.909 [2024-11-20 06:01:02.509545] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:49:42.909 [2024-11-20 06:01:02.657659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.657896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:49:42.909 [2024-11-20 06:01:02.657943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:49:42.909 [2024-11-20 06:01:02.657958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.658087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.658104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:49:42.909 [2024-11-20 06:01:02.658116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.099 ms 00:49:42.909 [2024-11-20 06:01:02.658127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.658161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:49:42.909 [2024-11-20 06:01:02.659441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:49:42.909 [2024-11-20 06:01:02.659471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.659480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:49:42.909 [2024-11-20 06:01:02.659489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.321 ms 00:49:42.909 [2024-11-20 06:01:02.659498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.662179] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:49:42.909 [2024-11-20 06:01:02.686288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.686384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:49:42.909 [2024-11-20 06:01:02.686433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.150 ms 00:49:42.909 [2024-11-20 06:01:02.686443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.686605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.686620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:49:42.909 [2024-11-20 06:01:02.686630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:49:42.909 [2024-11-20 06:01:02.686640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.701146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 
06:01:02.701199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:49:42.909 [2024-11-20 06:01:02.701213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.378 ms 00:49:42.909 [2024-11-20 06:01:02.701238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.701342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.701363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:49:42.909 [2024-11-20 06:01:02.701373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:49:42.909 [2024-11-20 06:01:02.701381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.701478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.701488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:49:42.909 [2024-11-20 06:01:02.701502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:49:42.909 [2024-11-20 06:01:02.701510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.701549] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:49:42.909 [2024-11-20 06:01:02.707956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.708019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:49:42.909 [2024-11-20 06:01:02.708033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.427 ms 00:49:42.909 [2024-11-20 06:01:02.708047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.708098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.708108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:49:42.909 [2024-11-20 06:01:02.708118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:49:42.909 [2024-11-20 06:01:02.708126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.708190] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:49:42.909 [2024-11-20 06:01:02.708218] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:49:42.909 [2024-11-20 06:01:02.708260] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:49:42.909 [2024-11-20 06:01:02.708276] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:49:42.909 [2024-11-20 06:01:02.708372] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:49:42.909 [2024-11-20 06:01:02.708383] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:49:42.909 [2024-11-20 06:01:02.708394] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:49:42.909 [2024-11-20 06:01:02.708404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:49:42.909 [2024-11-20 06:01:02.708414] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:49:42.909 [2024-11-20 06:01:02.708427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:49:42.909 [2024-11-20 06:01:02.708435] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:49:42.909 [2024-11-20 06:01:02.708444] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:49:42.909 [2024-11-20 06:01:02.708452] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:49:42.909 [2024-11-20 06:01:02.708460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.708469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:49:42.909 [2024-11-20 06:01:02.708479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:49:42.909 [2024-11-20 06:01:02.708487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.708564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.909 [2024-11-20 06:01:02.708572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:49:42.909 [2024-11-20 06:01:02.708581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:49:42.909 [2024-11-20 06:01:02.708595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.909 [2024-11-20 06:01:02.708697] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:49:42.909 [2024-11-20 06:01:02.708709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:49:42.909 [2024-11-20 06:01:02.708718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:49:42.909 [2024-11-20 06:01:02.708726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.909 [2024-11-20 06:01:02.708735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:49:42.910 [2024-11-20 06:01:02.708742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.708749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:49:42.910 [2024-11-20 06:01:02.708756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:49:42.910 [2024-11-20 06:01:02.708765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:49:42.910 [2024-11-20 06:01:02.708773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.708781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:49:42.910 [2024-11-20 06:01:02.708788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:49:42.910 [2024-11-20 06:01:02.708795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.708821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:49:42.910 [2024-11-20 06:01:02.708830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:49:42.910 [2024-11-20 06:01:02.708837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.708844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:49:42.910 [2024-11-20 06:01:02.708851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:49:42.910 [2024-11-20 06:01:02.708857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.708865] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:49:42.910 [2024-11-20 06:01:02.708873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:49:42.910 [2024-11-20 06:01:02.708880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:42.910 [2024-11-20 06:01:02.708887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:49:42.910 [2024-11-20 06:01:02.708895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:49:42.910 [2024-11-20 06:01:02.708902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:42.910 [2024-11-20 06:01:02.708934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:49:42.910 [2024-11-20 06:01:02.708943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:49:42.910 [2024-11-20 06:01:02.708950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:42.910 [2024-11-20 06:01:02.708957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:49:42.910 [2024-11-20 06:01:02.708965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:49:42.910 [2024-11-20 06:01:02.708972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:49:42.910 [2024-11-20 06:01:02.708979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:49:42.910 [2024-11-20 06:01:02.708986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:49:42.910 [2024-11-20 06:01:02.708993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.709000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:49:42.910 [2024-11-20 06:01:02.709008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:49:42.910 [2024-11-20 06:01:02.709016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.709023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:49:42.910 [2024-11-20 06:01:02.709030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:49:42.910 [2024-11-20 06:01:02.709037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.709044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:49:42.910 [2024-11-20 06:01:02.709052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:49:42.910 [2024-11-20 06:01:02.709059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.709067] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:49:42.910 [2024-11-20 06:01:02.709077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:49:42.910 [2024-11-20 06:01:02.709084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:49:42.910 [2024-11-20 06:01:02.709092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:49:42.910 [2024-11-20 06:01:02.709107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:49:42.910 [2024-11-20 06:01:02.709114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:49:42.910 [2024-11-20 06:01:02.709122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:49:42.910 [2024-11-20 06:01:02.709129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:49:42.910 [2024-11-20 06:01:02.709136] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:49:42.910 [2024-11-20 06:01:02.709143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:49:42.910 [2024-11-20 06:01:02.709153] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:49:42.910 [2024-11-20 06:01:02.709163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:49:42.910 [2024-11-20 06:01:02.709181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:49:42.910 [2024-11-20 06:01:02.709203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:49:42.910 [2024-11-20 06:01:02.709211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:49:42.910 [2024-11-20 06:01:02.709218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:49:42.910 [2024-11-20 06:01:02.709226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:49:42.910 [2024-11-20 06:01:02.709282] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:49:42.910 [2024-11-20 06:01:02.709290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:49:42.910 [2024-11-20 06:01:02.709306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:49:42.910 [2024-11-20 06:01:02.709314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:49:42.910 [2024-11-20 06:01:02.709339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:49:42.910 [2024-11-20 06:01:02.709350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:42.910 [2024-11-20 06:01:02.709359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:49:42.910 [2024-11-20 06:01:02.709368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:49:42.910 [2024-11-20 06:01:02.709377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:42.910 [2024-11-20 06:01:02.709447] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:49:42.910 [2024-11-20 06:01:02.709466] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:49:46.196 [2024-11-20 06:01:05.910908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:05.911012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:49:46.196 [2024-11-20 06:01:05.911033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3207.634 ms 00:49:46.196 [2024-11-20 06:01:05.911043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:05.963693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:05.963773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:49:46.196 [2024-11-20 06:01:05.963790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.283 ms 00:49:46.196 [2024-11-20 06:01:05.963799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:05.963989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:05.964007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:49:46.196 [2024-11-20 06:01:05.964017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:49:46.196 [2024-11-20 06:01:05.964026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.026220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.026407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:49:46.196 [2024-11-20 06:01:06.026431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.244 ms 00:49:46.196 [2024-11-20 06:01:06.026448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.026539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.026551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:49:46.196 [2024-11-20 06:01:06.026563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:49:46.196 [2024-11-20 06:01:06.026574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.027467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.027489] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:49:46.196 [2024-11-20 06:01:06.027501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.809 ms 00:49:46.196 [2024-11-20 06:01:06.027510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.027571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.027582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:49:46.196 [2024-11-20 06:01:06.027591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:49:46.196 [2024-11-20 06:01:06.027600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.054579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.054642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:49:46.196 [2024-11-20 06:01:06.054659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.004 ms 00:49:46.196 [2024-11-20 06:01:06.054670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.090531] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:49:46.196 [2024-11-20 06:01:06.090602] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:49:46.196 [2024-11-20 06:01:06.090619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.090629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:49:46.196 [2024-11-20 06:01:06.090643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.793 ms 00:49:46.196 [2024-11-20 06:01:06.090651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.196 [2024-11-20 06:01:06.113271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.196 [2024-11-20 06:01:06.113432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:49:46.196 [2024-11-20 06:01:06.113454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.576 ms 00:49:46.196 [2024-11-20 06:01:06.113465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.455 [2024-11-20 06:01:06.135570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.455 [2024-11-20 06:01:06.135646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:49:46.455 [2024-11-20 06:01:06.135661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.056 ms 00:49:46.455 [2024-11-20 06:01:06.135687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.455 [2024-11-20 06:01:06.159326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.455 [2024-11-20 06:01:06.159408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:49:46.455 [2024-11-20 06:01:06.159425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.595 ms 00:49:46.455 [2024-11-20 06:01:06.159435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.160538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.160577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:49:46.456 [2024-11-20 
06:01:06.160590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.884 ms 00:49:46.456 [2024-11-20 06:01:06.160599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.271935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.272026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:49:46.456 [2024-11-20 06:01:06.272043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 111.505 ms 00:49:46.456 [2024-11-20 06:01:06.272052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.290195] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:49:46.456 [2024-11-20 06:01:06.292152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.292185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:49:46.456 [2024-11-20 06:01:06.292213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.018 ms 00:49:46.456 [2024-11-20 06:01:06.292222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.292366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.292383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:49:46.456 [2024-11-20 06:01:06.292393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:49:46.456 [2024-11-20 06:01:06.292402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.292487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.292498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:49:46.456 [2024-11-20 06:01:06.292507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:49:46.456 [2024-11-20 06:01:06.292516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.292545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.292555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:49:46.456 [2024-11-20 06:01:06.292569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:49:46.456 [2024-11-20 06:01:06.292593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.292633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:49:46.456 [2024-11-20 06:01:06.292646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.292656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:49:46.456 [2024-11-20 06:01:06.292665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:49:46.456 [2024-11-20 06:01:06.292674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.335437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.335558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:49:46.456 [2024-11-20 06:01:06.335577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.810 ms 00:49:46.456 [2024-11-20 06:01:06.335586] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.335750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:46.456 [2024-11-20 06:01:06.335764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:49:46.456 [2024-11-20 06:01:06.335773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:49:46.456 [2024-11-20 06:01:06.335782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:46.456 [2024-11-20 06:01:06.337592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3686.441 ms, result 0 00:49:46.456 [2024-11-20 06:01:06.351950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:46.456 [2024-11-20 06:01:06.368037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:49:46.715 [2024-11-20 06:01:06.379302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:49:47.283 06:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:49:47.283 06:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:49:47.283 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:49:47.283 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:49:47.283 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:49:47.543 [2024-11-20 06:01:07.370117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:47.543 [2024-11-20 06:01:07.370184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:49:47.543 [2024-11-20 06:01:07.370200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:49:47.543 [2024-11-20 06:01:07.370216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:47.543 [2024-11-20 06:01:07.370248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:47.543 [2024-11-20 06:01:07.370259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:49:47.543 [2024-11-20 06:01:07.370269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:47.543 [2024-11-20 06:01:07.370279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:47.543 [2024-11-20 06:01:07.370300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:49:47.543 [2024-11-20 06:01:07.370311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:49:47.543 [2024-11-20 06:01:07.370320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:49:47.543 [2024-11-20 06:01:07.370329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:49:47.543 [2024-11-20 06:01:07.370406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.303 ms, result 0 00:49:47.543 true 00:49:47.543 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:47.803 { 00:49:47.803 "name": "ftl", 00:49:47.803 "properties": [ 00:49:47.803 { 00:49:47.803 "name": "superblock_version", 00:49:47.803 "value": 5, 00:49:47.803 "read-only": true 00:49:47.803 }, 
00:49:47.803 { 00:49:47.803 "name": "base_device", 00:49:47.803 "bands": [ 00:49:47.803 { 00:49:47.803 "id": 0, 00:49:47.803 "state": "CLOSED", 00:49:47.803 "validity": 1.0 00:49:47.803 }, 00:49:47.803 { 00:49:47.803 "id": 1, 00:49:47.803 "state": "CLOSED", 00:49:47.803 "validity": 1.0 00:49:47.803 }, 00:49:47.803 { 00:49:47.803 "id": 2, 00:49:47.803 "state": "CLOSED", 00:49:47.803 "validity": 0.007843137254901933 00:49:47.803 }, 00:49:47.803 { 00:49:47.803 "id": 3, 00:49:47.803 "state": "FREE", 00:49:47.803 "validity": 0.0 00:49:47.803 }, 00:49:47.803 { 00:49:47.804 "id": 4, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 5, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 6, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 7, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 8, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 9, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 10, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 11, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 12, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 13, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 14, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 15, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 16, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 17, 00:49:47.804 "state": "FREE", 00:49:47.804 "validity": 0.0 00:49:47.804 } 00:49:47.804 ], 00:49:47.804 "read-only": true 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "name": "cache_device", 00:49:47.804 "type": "bdev", 00:49:47.804 "chunks": [ 00:49:47.804 { 00:49:47.804 "id": 0, 00:49:47.804 "state": "INACTIVE", 00:49:47.804 "utilization": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 1, 00:49:47.804 "state": "OPEN", 00:49:47.804 "utilization": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 2, 00:49:47.804 "state": "OPEN", 00:49:47.804 "utilization": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 3, 00:49:47.804 "state": "FREE", 00:49:47.804 "utilization": 0.0 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "id": 4, 00:49:47.804 "state": "FREE", 00:49:47.804 "utilization": 0.0 00:49:47.804 } 00:49:47.804 ], 00:49:47.804 "read-only": true 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "name": "verbose_mode", 00:49:47.804 "value": true, 00:49:47.804 "unit": "", 00:49:47.804 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:49:47.804 }, 00:49:47.804 { 00:49:47.804 "name": "prep_upgrade_on_shutdown", 00:49:47.804 "value": false, 00:49:47.804 "unit": "", 00:49:47.804 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:49:47.804 } 00:49:47.804 ] 00:49:47.804 } 00:49:47.804 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:49:47.804 06:01:07 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:47.804 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:49:48.063 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:49:48.063 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:49:48.063 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:49:48.063 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:49:48.063 06:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:49:48.323 Validate MD5 checksum, iteration 1 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:48.323 06:01:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:49:48.583 [2024-11-20 06:01:08.264371] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
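The xtrace above boils down to two jq probes against bdev_ftl_get_properties: count cache chunks with non-zero utilization and bands left OPENED, and treat the device as dirty only if either is non-zero. A condensed sketch, reconstructed from the trace (upgrade_shutdown.sh@82/@89) rather than taken from the script source; the echoed message is illustrative:

    # Reconstructed from the xtrace above; not verbatim script source.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache-device chunks that still hold unwritten user data.
    used=$("$rpc" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
               | select(.utilization != 0.0)] | length')

    # Bands left open. Note the trace filters on .name == "bands", even though the
    # JSON dump above nests the band list under the "base_device" property.
    opened=$("$rpc" bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "bands") | .bands[]
               | select(.state == "OPENED")] | length')

    # (illustrative message)
    [[ $used -ne 0 || $opened -ne 0 ]] && echo "device is dirty, recovery expected"

Here both counts come back 0 (used=0, opened=0), so the test proceeds straight to recording baseline checksums.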
00:49:48.583 [2024-11-20 06:01:08.264665] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82183 ] 00:49:48.583 [2024-11-20 06:01:08.449395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:48.843 [2024-11-20 06:01:08.597639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:50.747  [2024-11-20T06:01:11.233Z] Copying: 571/1024 [MB] (571 MBps) [2024-11-20T06:01:13.136Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:49:53.217 00:49:53.217 06:01:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:49:53.217 06:01:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b0d863325d4ea4601719fe3d89189968 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b0d863325d4ea4601719fe3d89189968 != \b\0\d\8\6\3\3\2\5\d\4\e\a\4\6\0\1\7\1\9\f\e\3\d\8\9\1\8\9\9\6\8 ]] 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:49:55.117 Validate MD5 checksum, iteration 2 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:49:55.117 06:01:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:49:55.376 [2024-11-20 06:01:15.120895] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
00:49:55.376 [2024-11-20 06:01:15.121161] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82257 ] 00:49:55.635 [2024-11-20 06:01:15.306453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:55.635 [2024-11-20 06:01:15.467710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:49:57.542  [2024-11-20T06:01:18.399Z] Copying: 569/1024 [MB] (569 MBps) [2024-11-20T06:01:19.774Z] Copying: 1024/1024 [MB] (average 547 MBps) 00:49:59.855 00:49:59.855 06:01:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:49:59.855 06:01:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ec660c0ae414191e40a524d624c262d8 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ec660c0ae414191e40a524d624c262d8 != \e\c\6\6\0\c\0\a\e\4\1\4\1\9\1\e\4\0\a\5\2\4\d\6\2\4\c\2\6\2\d\8 ]] 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 82093 ]] 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 82093 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82334 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82334 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 82334 ']' 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:02.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
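What just happened, condensed: tcp_target_shutdown_dirty (ftl/common.sh@137-139) kills the target with SIGKILL so FTL never gets a clean shutdown, then tcp_target_setup relaunches spdk_tgt against the saved tgt.json and waits on the RPC socket. A sketch of that sequence, reconstructed from the xtrace above with the variable names it shows; not the literal script source:

    # Simulate a crash: SIGKILL leaves the FTL superblock marked dirty.
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Bring the target back up from the saved config and wait for its RPC socket.
    spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # helper from autotest_common.sh, per the trace

On the restart that follows, the startup traces switch from "Initialize ..." to "Recover ..." actions: the dirty superblock forces band-state recovery, P2L checkpoint replay, and open-chunk recovery before the bdev comes back up.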
00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:50:02.419 06:01:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:02.419 [2024-11-20 06:01:21.979946] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:50:02.419 [2024-11-20 06:01:21.980113] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82334 ] 00:50:02.419 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 82093 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:50:02.419 [2024-11-20 06:01:22.166809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:02.419 [2024-11-20 06:01:22.314170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:03.798 [2024-11-20 06:01:23.530611] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:50:03.798 [2024-11-20 06:01:23.530731] bdev.c:8470:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:50:03.798 [2024-11-20 06:01:23.680581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:03.798 [2024-11-20 06:01:23.680678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:50:03.798 [2024-11-20 06:01:23.680697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:50:03.798 [2024-11-20 06:01:23.680707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:03.798 [2024-11-20 06:01:23.680829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:03.798 [2024-11-20 06:01:23.680846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:50:03.798 [2024-11-20 06:01:23.680857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:50:03.798 [2024-11-20 06:01:23.680866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:03.799 [2024-11-20 06:01:23.680897] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:50:03.799 [2024-11-20 06:01:23.682282] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:50:03.799 [2024-11-20 06:01:23.682414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:03.799 [2024-11-20 06:01:23.682427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:50:03.799 [2024-11-20 06:01:23.682439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.526 ms 00:50:03.799 [2024-11-20 06:01:23.682448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:03.799 [2024-11-20 06:01:23.682971] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:50:03.799 [2024-11-20 06:01:23.716511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:03.799 [2024-11-20 06:01:23.716602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:50:03.799 [2024-11-20 06:01:23.716622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.601 ms 00:50:03.799 [2024-11-20 06:01:23.716632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.059 [2024-11-20 06:01:23.736030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:50:04.059 [2024-11-20 06:01:23.736104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:50:04.059 [2024-11-20 06:01:23.736144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:50:04.059 [2024-11-20 06:01:23.736154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.736647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.736667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:50:04.060 [2024-11-20 06:01:23.736678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:50:04.060 [2024-11-20 06:01:23.736687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.736770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.736785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:50:04.060 [2024-11-20 06:01:23.736796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:50:04.060 [2024-11-20 06:01:23.736805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.736954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.737003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:50:04.060 [2024-11-20 06:01:23.737051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:50:04.060 [2024-11-20 06:01:23.737090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.737159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:50:04.060 [2024-11-20 06:01:23.743922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.744022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:50:04.060 [2024-11-20 06:01:23.744040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.787 ms 00:50:04.060 [2024-11-20 06:01:23.744050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.744109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.744120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:50:04.060 [2024-11-20 06:01:23.744131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:50:04.060 [2024-11-20 06:01:23.744141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.744194] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:50:04.060 [2024-11-20 06:01:23.744224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:50:04.060 [2024-11-20 06:01:23.744267] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:50:04.060 [2024-11-20 06:01:23.744289] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:50:04.060 [2024-11-20 06:01:23.744401] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:50:04.060 [2024-11-20 06:01:23.744415] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:50:04.060 [2024-11-20 06:01:23.744428] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:50:04.060 [2024-11-20 06:01:23.744440] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:50:04.060 [2024-11-20 06:01:23.744452] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:50:04.060 [2024-11-20 06:01:23.744462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:50:04.060 [2024-11-20 06:01:23.744471] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:50:04.060 [2024-11-20 06:01:23.744480] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:50:04.060 [2024-11-20 06:01:23.744489] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:50:04.060 [2024-11-20 06:01:23.744499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.744512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:50:04.060 [2024-11-20 06:01:23.744522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:50:04.060 [2024-11-20 06:01:23.744531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.744622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.060 [2024-11-20 06:01:23.744633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:50:04.060 [2024-11-20 06:01:23.744643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:50:04.060 [2024-11-20 06:01:23.744652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.060 [2024-11-20 06:01:23.744768] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:50:04.060 [2024-11-20 06:01:23.744781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:50:04.060 [2024-11-20 06:01:23.744795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:50:04.060 [2024-11-20 06:01:23.744805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.744814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:50:04.060 [2024-11-20 06:01:23.744836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.744846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:50:04.060 [2024-11-20 06:01:23.744855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:50:04.060 [2024-11-20 06:01:23.744865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:50:04.060 [2024-11-20 06:01:23.744873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.744884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:50:04.060 [2024-11-20 06:01:23.744893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:50:04.060 [2024-11-20 06:01:23.744902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.744911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:50:04.060 [2024-11-20 06:01:23.744919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:50:04.060 [2024-11-20 06:01:23.744934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.744942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:50:04.060 [2024-11-20 06:01:23.744951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:50:04.060 [2024-11-20 06:01:23.744960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.744968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:50:04.060 [2024-11-20 06:01:23.744976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:50:04.060 [2024-11-20 06:01:23.744984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:04.060 [2024-11-20 06:01:23.744992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:50:04.060 [2024-11-20 06:01:23.745019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:50:04.060 [2024-11-20 06:01:23.745028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:04.060 [2024-11-20 06:01:23.745036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:50:04.060 [2024-11-20 06:01:23.745044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:50:04.060 [2024-11-20 06:01:23.745052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:04.060 [2024-11-20 06:01:23.745060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:50:04.060 [2024-11-20 06:01:23.745069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:50:04.060 [2024-11-20 06:01:23.745077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:50:04.060 [2024-11-20 06:01:23.745085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:50:04.060 [2024-11-20 06:01:23.745093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:50:04.060 [2024-11-20 06:01:23.745102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.745110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:50:04.060 [2024-11-20 06:01:23.745118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:50:04.060 [2024-11-20 06:01:23.745126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.745135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:50:04.060 [2024-11-20 06:01:23.745143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:50:04.060 [2024-11-20 06:01:23.745151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.745159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:50:04.060 [2024-11-20 06:01:23.745166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:50:04.060 [2024-11-20 06:01:23.745174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.745181] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:50:04.060 [2024-11-20 06:01:23.745191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:50:04.060 [2024-11-20 06:01:23.745199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:50:04.060 [2024-11-20 06:01:23.745208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:50:04.060 [2024-11-20 06:01:23.745217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:50:04.060 [2024-11-20 06:01:23.745225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:50:04.060 [2024-11-20 06:01:23.745234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:50:04.060 [2024-11-20 06:01:23.745243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:50:04.060 [2024-11-20 06:01:23.745251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:50:04.060 [2024-11-20 06:01:23.745259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:50:04.060 [2024-11-20 06:01:23.745270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:50:04.060 [2024-11-20 06:01:23.745282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:04.060 [2024-11-20 06:01:23.745293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:50:04.060 [2024-11-20 06:01:23.745302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:50:04.060 [2024-11-20 06:01:23.745311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:50:04.060 [2024-11-20 06:01:23.745320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:50:04.060 [2024-11-20 06:01:23.745329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:50:04.061 [2024-11-20 06:01:23.745338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:50:04.061 [2024-11-20 06:01:23.745347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:50:04.061 [2024-11-20 06:01:23.745356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:50:04.061 [2024-11-20 06:01:23.745421] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:50:04.061 [2024-11-20 06:01:23.745432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:50:04.061 [2024-11-20 06:01:23.745456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:50:04.061 [2024-11-20 06:01:23.745464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:50:04.061 [2024-11-20 06:01:23.745473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:50:04.061 [2024-11-20 06:01:23.745484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.745494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:50:04.061 [2024-11-20 06:01:23.745503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.786 ms 00:50:04.061 [2024-11-20 06:01:23.745512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.794080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.794160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:50:04.061 [2024-11-20 06:01:23.794177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.571 ms 00:50:04.061 [2024-11-20 06:01:23.794186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.794266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.794276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:50:04.061 [2024-11-20 06:01:23.794285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:50:04.061 [2024-11-20 06:01:23.794294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.852541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.852621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:50:04.061 [2024-11-20 06:01:23.852638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 58.241 ms 00:50:04.061 [2024-11-20 06:01:23.852648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.852752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.852764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:50:04.061 [2024-11-20 06:01:23.852775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:50:04.061 [2024-11-20 06:01:23.852789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.852964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.852979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:50:04.061 [2024-11-20 06:01:23.852989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:50:04.061 [2024-11-20 06:01:23.852997] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.853046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.853055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:50:04.061 [2024-11-20 06:01:23.853065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:50:04.061 [2024-11-20 06:01:23.853073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.880651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.880726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:50:04.061 [2024-11-20 06:01:23.880741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.598 ms 00:50:04.061 [2024-11-20 06:01:23.880773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.880997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.881015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:50:04.061 [2024-11-20 06:01:23.881027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:50:04.061 [2024-11-20 06:01:23.881036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.925534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.925639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:50:04.061 [2024-11-20 06:01:23.925658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.550 ms 00:50:04.061 [2024-11-20 06:01:23.925684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.061 [2024-11-20 06:01:23.944216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.061 [2024-11-20 06:01:23.944419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:50:04.061 [2024-11-20 06:01:23.944469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms 00:50:04.061 [2024-11-20 06:01:23.944478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.320 [2024-11-20 06:01:24.058742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.320 [2024-11-20 06:01:24.058989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:50:04.320 [2024-11-20 06:01:24.059028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 114.343 ms 00:50:04.320 [2024-11-20 06:01:24.059038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.320 [2024-11-20 06:01:24.059341] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:50:04.320 [2024-11-20 06:01:24.059535] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:50:04.320 [2024-11-20 06:01:24.059721] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:50:04.320 [2024-11-20 06:01:24.059917] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:50:04.320 [2024-11-20 06:01:24.059931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.320 [2024-11-20 06:01:24.059941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:50:04.320 
[2024-11-20 06:01:24.059952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.782 ms 00:50:04.320 [2024-11-20 06:01:24.059961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.320 [2024-11-20 06:01:24.060102] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:50:04.320 [2024-11-20 06:01:24.060117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.320 [2024-11-20 06:01:24.060132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:50:04.320 [2024-11-20 06:01:24.060143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:50:04.320 [2024-11-20 06:01:24.060151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.320 [2024-11-20 06:01:24.091593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.320 [2024-11-20 06:01:24.091693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:50:04.320 [2024-11-20 06:01:24.091712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.472 ms 00:50:04.320 [2024-11-20 06:01:24.091723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.320 [2024-11-20 06:01:24.110533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.320 [2024-11-20 06:01:24.110635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:50:04.320 [2024-11-20 06:01:24.110654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:50:04.320 [2024-11-20 06:01:24.110664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.320 [2024-11-20 06:01:24.110863] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:50:04.320 [2024-11-20 06:01:24.111225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.320 [2024-11-20 06:01:24.111244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:50:04.320 [2024-11-20 06:01:24.111255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:50:04.320 [2024-11-20 06:01:24.111264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.888 [2024-11-20 06:01:24.509678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.888 [2024-11-20 06:01:24.509947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:50:04.888 [2024-11-20 06:01:24.509978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 397.017 ms 00:50:04.889 [2024-11-20 06:01:24.509994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.889 [2024-11-20 06:01:24.517048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.889 [2024-11-20 06:01:24.517211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:50:04.889 [2024-11-20 06:01:24.517234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:50:04.889 [2024-11-20 06:01:24.517246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.889 [2024-11-20 06:01:24.517686] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:50:04.889 [2024-11-20 06:01:24.517720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.889 [2024-11-20 06:01:24.517732] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:50:04.889 [2024-11-20 06:01:24.517744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:50:04.889 [2024-11-20 06:01:24.517755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.889 [2024-11-20 06:01:24.517790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.889 [2024-11-20 06:01:24.517817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:50:04.889 [2024-11-20 06:01:24.517831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:50:04.889 [2024-11-20 06:01:24.517841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:04.889 [2024-11-20 06:01:24.517895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 407.817 ms, result 0 00:50:04.889 [2024-11-20 06:01:24.517953] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:50:04.889 [2024-11-20 06:01:24.518070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:04.889 [2024-11-20 06:01:24.518079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:50:04.889 [2024-11-20 06:01:24.518089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.119 ms 00:50:04.889 [2024-11-20 06:01:24.518097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.147 [2024-11-20 06:01:24.915529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.147 [2024-11-20 06:01:24.915636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:50:05.147 [2024-11-20 06:01:24.915657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 396.078 ms 00:50:05.147 [2024-11-20 06:01:24.915666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.147 [2024-11-20 06:01:24.922629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.147 [2024-11-20 06:01:24.922705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:50:05.147 [2024-11-20 06:01:24.922721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.890 ms 00:50:05.147 [2024-11-20 06:01:24.922731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.147 [2024-11-20 06:01:24.923037] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:50:05.147 [2024-11-20 06:01:24.923064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.147 [2024-11-20 06:01:24.923075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:50:05.147 [2024-11-20 06:01:24.923086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.300 ms 00:50:05.147 [2024-11-20 06:01:24.923095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.147 [2024-11-20 06:01:24.923130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.147 [2024-11-20 06:01:24.923142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:50:05.147 [2024-11-20 06:01:24.923152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:50:05.147 [2024-11-20 06:01:24.923161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.147 
[2024-11-20 06:01:24.923212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 406.033 ms, result 0 00:50:05.147 [2024-11-20 06:01:24.923273] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:50:05.147 [2024-11-20 06:01:24.923286] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:50:05.147 [2024-11-20 06:01:24.923298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.923308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:50:05.148 [2024-11-20 06:01:24.923318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 814.031 ms 00:50:05.148 [2024-11-20 06:01:24.923328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.923367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.923379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:50:05.148 [2024-11-20 06:01:24.923395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:50:05.148 [2024-11-20 06:01:24.923405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.941866] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:50:05.148 [2024-11-20 06:01:24.942299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.942327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:50:05.148 [2024-11-20 06:01:24.942345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.907 ms 00:50:05.148 [2024-11-20 06:01:24.942357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.943260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.943302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:50:05.148 [2024-11-20 06:01:24.943322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:50:05.148 [2024-11-20 06:01:24.943332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.945865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.945907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:50:05.148 [2024-11-20 06:01:24.945920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.496 ms 00:50:05.148 [2024-11-20 06:01:24.945930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.946001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.946013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:50:05.148 [2024-11-20 06:01:24.946024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:50:05.148 [2024-11-20 06:01:24.946039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.946186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.946200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 
00:50:05.148 [2024-11-20 06:01:24.946210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:50:05.148 [2024-11-20 06:01:24.946221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.946249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.946259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:50:05.148 [2024-11-20 06:01:24.946269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:50:05.148 [2024-11-20 06:01:24.946278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.946325] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:50:05.148 [2024-11-20 06:01:24.946338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.946347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:50:05.148 [2024-11-20 06:01:24.946358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:50:05.148 [2024-11-20 06:01:24.946368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.946435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:05.148 [2024-11-20 06:01:24.946447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:50:05.148 [2024-11-20 06:01:24.946457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:50:05.148 [2024-11-20 06:01:24.946466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:05.148 [2024-11-20 06:01:24.948282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1269.493 ms, result 0 00:50:05.148 [2024-11-20 06:01:24.963317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:05.148 [2024-11-20 06:01:24.979337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:50:05.148 [2024-11-20 06:01:24.991510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:50:05.148 Validate MD5 checksum, iteration 1 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:50:05.148 06:01:25 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:50:05.148 06:01:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:50:05.405 [2024-11-20 06:01:25.120336] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:50:05.405 [2024-11-20 06:01:25.120483] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82371 ] 00:50:05.405 [2024-11-20 06:01:25.285420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:05.686 [2024-11-20 06:01:25.476704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:07.589  [2024-11-20T06:01:28.550Z] Copying: 503/1024 [MB] (503 MBps) [2024-11-20T06:01:28.550Z] Copying: 993/1024 [MB] (490 MBps) [2024-11-20T06:01:30.459Z] Copying: 1024/1024 [MB] (average 495 MBps) 00:50:10.540 00:50:10.540 06:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:50:10.540 06:01:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b0d863325d4ea4601719fe3d89189968 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b0d863325d4ea4601719fe3d89189968 != \b\0\d\8\6\3\3\2\5\d\4\e\a\4\6\0\1\7\1\9\f\e\3\d\8\9\1\8\9\9\6\8 ]] 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:50:12.456 Validate MD5 checksum, iteration 2 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:50:12.456 06:01:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:50:12.456 
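The second test_validate_checksum pass re-reads the same two 1 GiB windows (1024 blocks of 1 MiB, skip=0 then skip=1024) through the recovered device and compares against the sums captured before the kill. Condensed from the xtrace; the expected array is an illustrative name, the real script compares the literal checksums inline:

    # Condensed from the upgrade_shutdown.sh xtrace; 'expected' is an illustrative name.
    skip=0
    expected=(b0d863325d4ea4601719fe3d89189968 ec660c0ae414191e40a524d624c262d8)
    for ((i = 0; i < ${#expected[@]}; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd: spdk_dd reads the NVMe/TCP-attached ftln1 bdev into a plain file.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
            --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
        # Inside the test function: mismatch means data was lost across the dirty shutdown.
        [[ $sum == "${expected[i]}" ]] || return 1
    done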
[2024-11-20 06:01:32.310373] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 00:50:12.456 [2024-11-20 06:01:32.310532] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82448 ] 00:50:12.714 [2024-11-20 06:01:32.480664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:12.973 [2024-11-20 06:01:32.641410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:50:14.877  [2024-11-20T06:01:35.365Z] Copying: 512/1024 [MB] (512 MBps) [2024-11-20T06:01:39.641Z] Copying: 1024/1024 [MB] (average 549 MBps) 00:50:19.722 00:50:19.722 06:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:50:19.722 06:01:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ec660c0ae414191e40a524d624c262d8 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ec660c0ae414191e40a524d624c262d8 != \e\c\6\6\0\c\0\a\e\4\1\4\1\9\1\e\4\0\a\5\2\4\d\6\2\4\c\2\6\2\d\8 ]] 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82334 ]] 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82334 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 82334 ']' 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 82334 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82334 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82334' 00:50:21.624 killing process with pid 82334 
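[Editor's note, not part of the captured log] The two "Validate MD5 checksum" passes traced above (upgrade_shutdown.sh lines @96-@105) follow a simple loop: read 1024 x 1 MiB blocks from the FTL bdev over NVMe/TCP at an advancing offset, hash the result, and compare against the checksum recorded before the shutdown. A minimal sketch of that loop, reconstructed from the xtrace: tcp_dd, $testdir, and $iterations come from the surrounding test scripts, while the md5_sums array name is an assumption standing in for wherever the pre-shutdown checksums were stored.

# Sketch only; reconstructed from the xtrace above, not copied from the repo.
test_validate_checksum() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull 1024 x 1 MiB blocks from ftln1 over NVMe/TCP, offset by $skip blocks.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # md5_sums[] is assumed: the checksums recorded before the FTL shutdown.
        [[ $sum == "${md5_sums[i]}" ]] || return 1
    done
}

A mismatch on either iteration would make the [[ ... != ... ]] comparison in the trace above take its failure branch; here both sums (b0d863... and ec660c...) matched, so the test proceeds to cleanup.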
00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 82334 00:50:21.624 06:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 82334 00:50:23.530 [2024-11-20 06:01:42.943574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:50:23.530 [2024-11-20 06:01:42.969452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.530 [2024-11-20 06:01:42.969658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:50:23.530 [2024-11-20 06:01:42.969707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:50:23.530 [2024-11-20 06:01:42.969733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.530 [2024-11-20 06:01:42.969785] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:50:23.530 [2024-11-20 06:01:42.975591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.530 [2024-11-20 06:01:42.975716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:50:23.530 [2024-11-20 06:01:42.975747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.764 ms 00:50:23.530 [2024-11-20 06:01:42.975757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.530 [2024-11-20 06:01:42.976132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.530 [2024-11-20 06:01:42.976149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:50:23.530 [2024-11-20 06:01:42.976160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.320 ms 00:50:23.530 [2024-11-20 06:01:42.976170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.530 [2024-11-20 06:01:42.979054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.530 [2024-11-20 06:01:42.979110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:50:23.530 [2024-11-20 06:01:42.979123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.866 ms 00:50:23.530 [2024-11-20 06:01:42.979133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.530 [2024-11-20 06:01:42.980352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.530 [2024-11-20 06:01:42.980384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:50:23.530 [2024-11-20 06:01:42.980396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.166 ms 00:50:23.531 [2024-11-20 06:01:42.980412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:42.999818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:42.999950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:50:23.531 [2024-11-20 06:01:42.999973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.339 ms 00:50:23.531 [2024-11-20 06:01:43.000000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.009929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.010149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:50:23.531 [2024-11-20 06:01:43.010175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.862 ms 00:50:23.531 [2024-11-20 
06:01:43.010186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.010367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.010386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:50:23.531 [2024-11-20 06:01:43.010399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:50:23.531 [2024-11-20 06:01:43.010411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.030095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.030190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:50:23.531 [2024-11-20 06:01:43.030208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.677 ms 00:50:23.531 [2024-11-20 06:01:43.030218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.049450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.049682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:50:23.531 [2024-11-20 06:01:43.049706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.194 ms 00:50:23.531 [2024-11-20 06:01:43.049716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.068626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.068845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:50:23.531 [2024-11-20 06:01:43.068868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.859 ms 00:50:23.531 [2024-11-20 06:01:43.068877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.088401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.088603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:50:23.531 [2024-11-20 06:01:43.088627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.353 ms 00:50:23.531 [2024-11-20 06:01:43.088636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.088708] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:50:23.531 [2024-11-20 06:01:43.088731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:50:23.531 [2024-11-20 06:01:43.088744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:50:23.531 [2024-11-20 06:01:43.088754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:50:23.531 [2024-11-20 06:01:43.088763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 
0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:50:23.531 [2024-11-20 06:01:43.088952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:50:23.531 [2024-11-20 06:01:43.088961] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 04afc1a3-e019-45a2-a482-026a8c70763e 00:50:23.531 [2024-11-20 06:01:43.088972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:50:23.531 [2024-11-20 06:01:43.088982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:50:23.531 [2024-11-20 06:01:43.088991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:50:23.531 [2024-11-20 06:01:43.089003] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:50:23.531 [2024-11-20 06:01:43.089012] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:50:23.531 [2024-11-20 06:01:43.089023] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:50:23.531 [2024-11-20 06:01:43.089033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:50:23.531 [2024-11-20 06:01:43.089041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:50:23.531 [2024-11-20 06:01:43.089049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:50:23.531 [2024-11-20 06:01:43.089058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.089084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:50:23.531 [2024-11-20 06:01:43.089094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:50:23.531 [2024-11-20 06:01:43.089104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.114704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.114964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:50:23.531 [2024-11-20 06:01:43.114987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.578 ms 00:50:23.531 [2024-11-20 06:01:43.114998] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.115730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:50:23.531 [2024-11-20 06:01:43.115749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:50:23.531 [2024-11-20 06:01:43.115760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.666 ms 00:50:23.531 [2024-11-20 06:01:43.115770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.201562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.531 [2024-11-20 06:01:43.201653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:50:23.531 [2024-11-20 06:01:43.201685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.531 [2024-11-20 06:01:43.201696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.201776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.531 [2024-11-20 06:01:43.201787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:50:23.531 [2024-11-20 06:01:43.201796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.531 [2024-11-20 06:01:43.201824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.201977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.531 [2024-11-20 06:01:43.201994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:50:23.531 [2024-11-20 06:01:43.202005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.531 [2024-11-20 06:01:43.202014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.202035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.531 [2024-11-20 06:01:43.202050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:50:23.531 [2024-11-20 06:01:43.202060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.531 [2024-11-20 06:01:43.202069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.531 [2024-11-20 06:01:43.360369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.531 [2024-11-20 06:01:43.360469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:50:23.531 [2024-11-20 06:01:43.360487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.531 [2024-11-20 06:01:43.360511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.791 [2024-11-20 06:01:43.490258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.490362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:50:23.792 [2024-11-20 06:01:43.490377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.792 [2024-11-20 06:01:43.490387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.490536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.490549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:50:23.792 [2024-11-20 06:01:43.490559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:50:23.792 [2024-11-20 06:01:43.490567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.490621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.490633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:50:23.792 [2024-11-20 06:01:43.490657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.792 [2024-11-20 06:01:43.490683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.490862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.490877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:50:23.792 [2024-11-20 06:01:43.490887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.792 [2024-11-20 06:01:43.490896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.490949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.490975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:50:23.792 [2024-11-20 06:01:43.490985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.792 [2024-11-20 06:01:43.490999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.491048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.491059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:50:23.792 [2024-11-20 06:01:43.491068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.792 [2024-11-20 06:01:43.491078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.491133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:50:23.792 [2024-11-20 06:01:43.491144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:50:23.792 [2024-11-20 06:01:43.491157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:50:23.792 [2024-11-20 06:01:43.491165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:50:23.792 [2024-11-20 06:01:43.491324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 522.833 ms, result 0 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:50:25.169 Remove shared memory files 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:50:25.169 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:50:25.428 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:50:25.428 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid82093 00:50:25.428 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:50:25.428 06:01:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:50:25.428 ************************************ 00:50:25.428 END TEST ftl_upgrade_shutdown 00:50:25.428 ************************************ 00:50:25.428 00:50:25.428 real 1m44.942s 00:50:25.428 user 2m25.133s 00:50:25.428 sys 0m26.680s 00:50:25.428 06:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:50:25.428 06:01:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@14 -- # killprocess 75025 00:50:25.428 Process with pid 75025 is not found 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@952 -- # '[' -z 75025 ']' 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@956 -- # kill -0 75025 00:50:25.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (75025) - No such process 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 75025 is not found' 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82602 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:50:25.428 06:01:45 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82602 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@833 -- # '[' -z 82602 ']' 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:25.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:50:25.428 06:01:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:50:25.428 [2024-11-20 06:01:45.278660] Starting SPDK v25.01-pre git sha1 57b682926 / DPDK 24.03.0 initialization... 
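[Editor's note, not part of the captured log] The killprocess helper traced repeatedly above (pid 82334 killed, pid 75025 reported "not found") probes the target with kill -0, inspects the process name on Linux, then signals and waits. A hedged reconstruction from the traced steps (autotest_common.sh @952-@976) follows; the sudo-child redirection branch is an assumption inferred from the "reactor_0 = sudo" comparison in the trace, not confirmed from the source.

# Sketch only; variable names and the sudo branch are assumptions.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    # kill -0 sends no signal; it only tests whether the pid exists.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Assumed: if the pid is a sudo wrapper, signal its child instead.
        [[ $process_name == sudo ]] && pid=$(ps --no-headers -o pid= --ppid "$pid")
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null
}

That shape matches both outcomes seen in this log: pid 82334 (the tgt reactor) takes the kill/wait path, while stale pid 75025 short-circuits at the kill -0 probe.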
00:50:25.428 [2024-11-20 06:01:45.278944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82602 ] 00:50:25.686 [2024-11-20 06:01:45.462207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:25.944 [2024-11-20 06:01:45.610214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:50:26.877 06:01:46 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:50:26.877 06:01:46 ftl -- common/autotest_common.sh@866 -- # return 0 00:50:26.877 06:01:46 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:50:27.135 nvme0n1 00:50:27.135 06:01:47 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:50:27.135 06:01:47 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:50:27.135 06:01:47 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:50:27.393 06:01:47 ftl -- ftl/common.sh@28 -- # stores=0c17aa9a-51ad-42aa-9424-e68c8a37685b 00:50:27.393 06:01:47 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:50:27.393 06:01:47 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c17aa9a-51ad-42aa-9424-e68c8a37685b 00:50:27.653 06:01:47 ftl -- ftl/ftl.sh@23 -- # killprocess 82602 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@952 -- # '[' -z 82602 ']' 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@956 -- # kill -0 82602 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@957 -- # uname 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82602 00:50:27.653 killing process with pid 82602 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82602' 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@971 -- # kill 82602 00:50:27.653 06:01:47 ftl -- common/autotest_common.sh@976 -- # wait 82602 00:50:31.035 06:01:50 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:50:31.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:31.035 Waiting for block devices as requested 00:50:31.035 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:50:31.035 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:50:31.035 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:50:31.294 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:50:36.560 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:50:36.560 06:01:56 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:50:36.560 Remove shared memory files 00:50:36.560 06:01:56 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:50:36.560 06:01:56 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:50:36.560 06:01:56 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:50:36.560 06:01:56 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:50:36.560 06:01:56 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:50:36.560 06:01:56 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:50:36.560 
************************************ 00:50:36.560 END TEST ftl 00:50:36.560 ************************************ 00:50:36.560 00:50:36.560 real 11m6.976s 00:50:36.560 user 13m58.384s 00:50:36.560 sys 1m29.172s 00:50:36.560 06:01:56 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:50:36.560 06:01:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:50:36.560 06:01:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:50:36.560 06:01:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:50:36.560 06:01:56 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:50:36.560 06:01:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:50:36.560 06:01:56 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:50:36.560 06:01:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:50:36.560 06:01:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:50:36.560 06:01:56 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:50:36.560 06:01:56 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:50:36.560 06:01:56 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:50:36.560 06:01:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:50:36.560 06:01:56 -- common/autotest_common.sh@10 -- # set +x 00:50:36.560 06:01:56 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:50:36.560 06:01:56 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:50:36.560 06:01:56 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:50:36.560 06:01:56 -- common/autotest_common.sh@10 -- # set +x 00:50:38.475 INFO: APP EXITING 00:50:38.475 INFO: killing all VMs 00:50:38.475 INFO: killing vhost app 00:50:38.475 INFO: EXIT DONE 00:50:38.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:39.299 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:50:39.299 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:50:39.299 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:50:39.299 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:50:39.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:40.127 Cleaning 00:50:40.127 Removing: /var/run/dpdk/spdk0/config 00:50:40.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:50:40.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:50:40.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:50:40.127 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:50:40.127 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:50:40.127 Removing: /var/run/dpdk/spdk0/hugepage_info 00:50:40.127 Removing: /var/run/dpdk/spdk0 00:50:40.127 Removing: /var/run/dpdk/spdk_pid57850 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58107 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58347 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58461 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58518 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58657 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58681 00:50:40.386 Removing: /var/run/dpdk/spdk_pid58896 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59020 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59127 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59260 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59379 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59419 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59455 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59531 00:50:40.386 Removing: /var/run/dpdk/spdk_pid59665 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60129 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60216 
00:50:40.386 Removing: /var/run/dpdk/spdk_pid60290 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60317 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60486 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60509 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60668 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60695 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60765 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60788 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60858 00:50:40.386 Removing: /var/run/dpdk/spdk_pid60881 00:50:40.386 Removing: /var/run/dpdk/spdk_pid61093 00:50:40.386 Removing: /var/run/dpdk/spdk_pid61135 00:50:40.386 Removing: /var/run/dpdk/spdk_pid61224 00:50:40.386 Removing: /var/run/dpdk/spdk_pid61418 00:50:40.386 Removing: /var/run/dpdk/spdk_pid61519 00:50:40.386 Removing: /var/run/dpdk/spdk_pid61561 00:50:40.386 Removing: /var/run/dpdk/spdk_pid62016 00:50:40.386 Removing: /var/run/dpdk/spdk_pid62125 00:50:40.386 Removing: /var/run/dpdk/spdk_pid62240 00:50:40.386 Removing: /var/run/dpdk/spdk_pid62304 00:50:40.386 Removing: /var/run/dpdk/spdk_pid62335 00:50:40.386 Removing: /var/run/dpdk/spdk_pid62419 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63061 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63103 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63595 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63703 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63825 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63883 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63909 00:50:40.386 Removing: /var/run/dpdk/spdk_pid63940 00:50:40.386 Removing: /var/run/dpdk/spdk_pid65828 00:50:40.386 Removing: /var/run/dpdk/spdk_pid65987 00:50:40.386 Removing: /var/run/dpdk/spdk_pid65991 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66007 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66080 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66084 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66102 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66224 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66233 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66245 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66317 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66327 00:50:40.386 Removing: /var/run/dpdk/spdk_pid66339 00:50:40.386 Removing: /var/run/dpdk/spdk_pid67860 00:50:40.386 Removing: /var/run/dpdk/spdk_pid67979 00:50:40.645 Removing: /var/run/dpdk/spdk_pid69404 00:50:40.645 Removing: /var/run/dpdk/spdk_pid70771 00:50:40.645 Removing: /var/run/dpdk/spdk_pid70916 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71053 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71179 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71327 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71407 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71560 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71946 00:50:40.645 Removing: /var/run/dpdk/spdk_pid71989 00:50:40.645 Removing: /var/run/dpdk/spdk_pid72473 00:50:40.645 Removing: /var/run/dpdk/spdk_pid72668 00:50:40.645 Removing: /var/run/dpdk/spdk_pid72776 00:50:40.645 Removing: /var/run/dpdk/spdk_pid72897 00:50:40.645 Removing: /var/run/dpdk/spdk_pid72960 00:50:40.645 Removing: /var/run/dpdk/spdk_pid72987 00:50:40.645 Removing: /var/run/dpdk/spdk_pid73421 00:50:40.645 Removing: /var/run/dpdk/spdk_pid73498 00:50:40.645 Removing: /var/run/dpdk/spdk_pid73611 00:50:40.645 Removing: /var/run/dpdk/spdk_pid74064 00:50:40.645 Removing: /var/run/dpdk/spdk_pid74212 00:50:40.645 Removing: /var/run/dpdk/spdk_pid75025 00:50:40.645 Removing: /var/run/dpdk/spdk_pid75178 00:50:40.645 Removing: /var/run/dpdk/spdk_pid75443 00:50:40.645 Removing: 
/var/run/dpdk/spdk_pid75548 00:50:40.645 Removing: /var/run/dpdk/spdk_pid75901 00:50:40.645 Removing: /var/run/dpdk/spdk_pid76166 00:50:40.645 Removing: /var/run/dpdk/spdk_pid76561 00:50:40.645 Removing: /var/run/dpdk/spdk_pid76811 00:50:40.645 Removing: /var/run/dpdk/spdk_pid76941 00:50:40.645 Removing: /var/run/dpdk/spdk_pid77005 00:50:40.645 Removing: /var/run/dpdk/spdk_pid77137 00:50:40.645 Removing: /var/run/dpdk/spdk_pid77169 00:50:40.645 Removing: /var/run/dpdk/spdk_pid77239 00:50:40.645 Removing: /var/run/dpdk/spdk_pid77440 00:50:40.645 Removing: /var/run/dpdk/spdk_pid77715 00:50:40.645 Removing: /var/run/dpdk/spdk_pid78100 00:50:40.645 Removing: /var/run/dpdk/spdk_pid78475 00:50:40.645 Removing: /var/run/dpdk/spdk_pid78880 00:50:40.645 Removing: /var/run/dpdk/spdk_pid79329 00:50:40.645 Removing: /var/run/dpdk/spdk_pid79483 00:50:40.645 Removing: /var/run/dpdk/spdk_pid79567 00:50:40.645 Removing: /var/run/dpdk/spdk_pid80103 00:50:40.645 Removing: /var/run/dpdk/spdk_pid80180 00:50:40.645 Removing: /var/run/dpdk/spdk_pid80593 00:50:40.645 Removing: /var/run/dpdk/spdk_pid80958 00:50:40.645 Removing: /var/run/dpdk/spdk_pid81416 00:50:40.645 Removing: /var/run/dpdk/spdk_pid81570 00:50:40.645 Removing: /var/run/dpdk/spdk_pid81626 00:50:40.645 Removing: /var/run/dpdk/spdk_pid81701 00:50:40.645 Removing: /var/run/dpdk/spdk_pid81769 00:50:40.645 Removing: /var/run/dpdk/spdk_pid81840 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82093 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82183 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82257 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82334 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82371 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82448 00:50:40.645 Removing: /var/run/dpdk/spdk_pid82602 00:50:40.645 Clean 00:50:40.904 06:02:00 -- common/autotest_common.sh@1451 -- # return 0 00:50:40.904 06:02:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:50:40.904 06:02:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:50:40.904 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:50:40.904 06:02:00 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:50:40.904 06:02:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:50:40.904 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:50:40.904 06:02:00 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:50:40.904 06:02:00 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:50:40.904 06:02:00 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:50:40.904 06:02:00 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:50:40.904 06:02:00 -- spdk/autotest.sh@394 -- # hostname 00:50:40.904 06:02:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:50:41.201 geninfo: WARNING: invalid characters removed from testname! 
00:51:07.750 06:02:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:11.943 06:02:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:14.472 06:02:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:17.019 06:02:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:19.548 06:02:39 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:22.079 06:02:41 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:51:24.625 06:02:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:51:24.625 06:02:43 -- spdk/autorun.sh@1 -- $ timing_finish 00:51:24.625 06:02:43 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:51:24.625 06:02:43 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:51:24.625 06:02:43 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:51:24.625 06:02:43 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:51:24.625 + [[ -n 5459 ]] 00:51:24.625 + sudo kill 5459 00:51:24.634 [Pipeline] } 00:51:24.650 [Pipeline] // timeout 00:51:24.655 [Pipeline] } 00:51:24.669 [Pipeline] // stage 00:51:24.675 [Pipeline] } 00:51:24.689 [Pipeline] // catchError 00:51:24.698 [Pipeline] stage 00:51:24.701 [Pipeline] { (Stop VM) 00:51:24.714 [Pipeline] sh 00:51:24.997 + vagrant halt 00:51:28.286 ==> default: Halting domain... 
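[Editor's note, not part of the captured log] The lcov post-processing just traced merges the pre-test baseline capture with the test-run capture, then strips third-party and uninteresting paths from the merged report. Condensed from the exact commands above; LCOV_OPTS stands in for the long --rc flag list, $out for the output directory, and the real run adds --ignore-errors unused,unused only on the '/usr/*' pass rather than on every filter.

# Sketch only; a condensation of the lcov invocations logged above.
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
# Merge baseline (pre-test) and test-run captures into one tracefile...
lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
    -o "$out/cov_total.info"
# ...then remove third-party, system, and tool-only paths from it in place.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"

The geninfo warning above ("invalid characters removed from testname") refers to the -t fedora39-cloud-... test name passed to the capture step and is harmless to the merged report.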
00:51:36.405 [Pipeline] sh 00:51:36.688 + vagrant destroy -f 00:51:39.976 ==> default: Removing domain... 00:51:40.246 [Pipeline] sh 00:51:40.544 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:51:40.552 [Pipeline] } 00:51:40.568 [Pipeline] // stage 00:51:40.572 [Pipeline] } 00:51:40.581 [Pipeline] // dir 00:51:40.585 [Pipeline] } 00:51:40.597 [Pipeline] // wrap 00:51:40.602 [Pipeline] } 00:51:40.614 [Pipeline] // catchError 00:51:40.622 [Pipeline] stage 00:51:40.624 [Pipeline] { (Epilogue) 00:51:40.636 [Pipeline] sh 00:51:40.914 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:51:47.489 [Pipeline] catchError 00:51:47.491 [Pipeline] { 00:51:47.504 [Pipeline] sh 00:51:47.787 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:51:47.788 Artifacts sizes are good 00:51:47.795 [Pipeline] } 00:51:47.810 [Pipeline] // catchError 00:51:47.821 [Pipeline] archiveArtifacts 00:51:47.828 Archiving artifacts 00:51:47.944 [Pipeline] cleanWs 00:51:47.956 [WS-CLEANUP] Deleting project workspace... 00:51:47.956 [WS-CLEANUP] Deferred wipeout is used... 00:51:47.964 [WS-CLEANUP] done 00:51:47.966 [Pipeline] } 00:51:47.980 [Pipeline] // stage 00:51:47.985 [Pipeline] } 00:51:47.999 [Pipeline] // node 00:51:48.003 [Pipeline] End of Pipeline 00:51:48.040 Finished: SUCCESS